Carbohydrate counting (or carb counting) is an important tool for people living with type 1 diabetes (T1D) to calculate insulin needs and manage blood sugar following a meal or snack.
However, carb counting is a difficult task, and it becomes especially challenging when eating out or eating non-prepackaged or unlabelled foods. This matters because people living with T1D are typically encouraged to eat whole foods rather than prepackaged, processed ones. Even with new technologies like the artificial pancreas (also called hybrid closed-loop systems), which can now correct small changes in blood glucose with fast-acting insulin, accurate carb estimates are still essential to calculate bolus insulin doses.
With the rapid rise of artificial intelligence (AI) and the increasing use of AI chatbots (e.g., ChatGPT), an important question emerges: could AI accurately count carbs in everyday meals for people living with T1D?
Can generative AI help with carb counting?
AI tools are increasingly being used to answer everyday health questions. One study tested how two popular AI models, ChatGPT-4o and Gemini Advanced, performed compared to a validated nutrition analysis app used in research (MetaDieta, Italy). The study measured how close the AI carb estimates came to MetaDieta's carb counts.
Researchers asked each AI to analyse the carbohydrate content of meals under 3 different conditions to simulate different real-world eating scenarios that give varying amounts of information about the meal.
- Minimal data: just the name (e.g., Caesar salad) and a picture of the meal. Meant to simulate eating in a social setting.
- Moderate data: name, picture, and limited ingredient list of the meal. Meant to simulate restaurant dining.
- Full data: name, picture, and full ingredient list with weights. Meant to simulate cooking at home.
Half of all meals were prepackaged food (industrially made) and half were non-prepackaged.
What did the study show? ChatGPT vs. Gemini
ChatGPT more accurate with moderate data
Overall, ChatGPT's carb estimates were closer to the reference software when given moderate data, with 18% error compared to Gemini's 29%. With minimal data, both AI models struggled, showing error rates between 35% and 45%. Predictably, when provided with full data, both systems performed much better, reaching around 13% error.
Prepackaged food improved accuracy
When comparing ChatGPT’s carb estimates for prepackaged versus non-prepackaged foods (without comparing to Gemini), prepackaged foods significantly improved accuracy when moderate or full data were provided. In fact, prepackaged meals with moderate data had the lowest error rate of all scenarios at just under 8%, compared to 28% for non-prepackaged meals at the same data level. When given minimal data, however, ChatGPT’s estimates varied widely, with error rates ranging from 25 to 45%, regardless of whether the meal was prepackaged or not.
For Gemini, both prepackaged and non-prepackaged meals had very poor carb estimations when only minimal data were provided, with error between 40 and 50%. With moderate data, Gemini performed slightly better on prepackaged meals (23% error) compared to non-prepackaged ones (34%). With full data, the difference became more pronounced: Gemini’s estimates were significantly more accurate for prepackaged meals (9% error) than non-prepackaged meals (17%).
More information means more accurate estimates
This study provides a first look at the usefulness of current AI models for estimating carb counts and, as might be expected, shows that providing more information results in more accurate estimates.
How does this compare to estimates of people with T1D?
Previous studies from 2013 and 2016 reported that carb counting error rates among people with T1D not using apps averaged 20-25% for home-made and hospital-made meals. For prepackaged meals with full information, ChatGPT was on average very accurate, within 10% error of the calculated carbs. However, this study only measured the absolute percentage error of the carb count estimates, which does not tell us whether the AIs were over- or under-estimating.
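To see why this distinction matters, here is a small illustration with entirely hypothetical numbers (not taken from the study): averaging the absolute percentage error tells you how far off the estimates were, but hides whether they ran high or low, because over- and under-estimates cancel out in the signed average.

```python
# Hypothetical illustration: absolute percentage error hides direction.
# All carb values below are invented for demonstration only.

reference = [60, 45, 30]   # "true" carb counts in grams (hypothetical)
estimates = [48, 54, 24]   # AI estimates: one high, two low (hypothetical)

# Signed percentage error keeps direction (+ = over, - = under)
signed = [(e - r) / r * 100 for e, r in zip(estimates, reference)]
# Absolute percentage error discards direction (what the study reported)
absolute = [abs(s) for s in signed]

mean_signed = sum(signed) / len(signed)        # over/under cancel out
mean_absolute = sum(absolute) / len(absolute)  # magnitude of error only

print(f"signed errors (%):   {[round(s) for s in signed]}")  # [-20, 20, -20]
print(f"mean signed error:   {mean_signed:.1f}%")            # -6.7%
print(f"mean absolute error: {mean_absolute:.1f}%")          # 20.0%
```

Here every estimate is off by 20%, so the mean absolute error is 20%, yet the mean signed error is only about -7%, masking the fact that the estimates mostly ran low. For insulin dosing, the direction of the error matters: consistent under-estimation risks high blood glucose, while consistent over-estimation risks hypoglycemia.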
What’s next?
Future research should test whether the AI models tend to over-estimate, under-estimate, or a mix of both, and directly compare their error rates to the methods currently used by people with T1D, including smartphone apps for carb counting. Someday AI may be able to accurately assess carbs in those difficult-to-count meals, but for now, more data are needed.
Want to get involved in research?
If you live with T1D and are looking to participate in research, consider joining the BETTER registry today!
Reference:
Tecce N, Vetrani C, Pelosi AL, Alfiore M, Mayol D, Maddaloni MG, Amodio M, Colao A. AI-Powered Carbohydrate Counting for Type 1 Diabetes: Accuracy and Real-World Performance. Diabetes Care. 2025 Aug 1;48(8):e97-e98. doi: 10.2337/dc25-0303. PMID: 40397829.
Written by: Cassandra Locatelli B.Sc.
Reviewed by:
- Sarah Haag, Clinical Nurse, B.Sc.
- Anne-Sophie Brazeau, RD, PhD
- Kaitlin McBride and Darrin Davis, patient partners