NutriBench: A Dataset for Evaluating Large Language Models on Nutrition Estimation from Meal Descriptions

📅 2024-07-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluations of large language models (LLMs) for estimating macronutrients (carbohydrates, protein, fat, calories) from natural-language food descriptions lack a reliable, standardized, and globally representative benchmark. Method: The authors introduce NutriBench, the first publicly available, human-verified benchmark of natural-language meal descriptions for nutrition estimation, comprising 11,857 meal descriptions generated from real-world global dietary intake data and annotated with macronutrient labels. They evaluate twelve leading LLMs (including GPT-4o, Llama3.1, Qwen2, Gemma2, and OpenBioLLM) on carbohydrate estimation using standard, chain-of-thought, and retrieval-augmented generation (RAG) prompting strategies. Contribution/Results: A study with professional nutritionists finds that LLMs can provide comparable but significantly faster estimates, and a simulation of the effect of carbohydrate predictions on the blood glucose levels of individuals with diabetes assesses real-world risk. The benchmark is publicly released.

📝 Abstract
Accurate nutrition estimation helps people make informed dietary choices and is essential in the prevention of serious health complications. We present NutriBench, the first publicly available natural language meal description nutrition benchmark. NutriBench consists of 11,857 meal descriptions generated from real-world global dietary intake data. The data is human-verified and annotated with macro-nutrient labels, including carbohydrates, proteins, fats, and calories. We conduct an extensive evaluation of NutriBench on the task of carbohydrate estimation, testing twelve leading Large Language Models (LLMs), including GPT-4o, Llama3.1, Qwen2, Gemma2, and OpenBioLLM models, using standard, Chain-of-Thought and Retrieval-Augmented Generation strategies. Additionally, we present a study involving professional nutritionists, finding that LLMs can provide comparable but significantly faster estimates. Finally, we perform a real-world risk assessment by simulating the effect of carbohydrate predictions on the blood glucose levels of individuals with diabetes. Our work highlights the opportunities and challenges of using LLMs for nutrition estimation, demonstrating their potential to aid professionals and laypersons and improve health outcomes. Our benchmark is publicly available at: https://mehak126.github.io/nutribench.html
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs on nutrition estimation from meal descriptions
Assessing accuracy of carbohydrate estimation using various LLM strategies
Simulating real-world impact of LLM predictions on diabetes management
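The core evaluation task above (querying a model for carbohydrate grams and scoring against ground truth) can be sketched as follows. This is a minimal illustration, not the paper's code: `estimate_carbs` is a hypothetical stand-in for an LLM call with a standard prompt, and the meals and labels are toy values.

```python
# Sketch of a NutriBench-style carbohydrate-estimation evaluation:
# query a model for grams of carbs in a meal description, parse the
# numeric answer, and score predictions with mean absolute error (MAE).
import re

def parse_grams(answer: str):
    """Extract the first numeric value (grams of carbs) from a model answer."""
    match = re.search(r"(\d+(?:\.\d+)?)", answer)
    return float(match.group(1)) if match else None

def estimate_carbs(meal: str) -> str:
    # Hypothetical placeholder for an LLM call with a standard prompt, e.g.:
    # "How many grams of carbohydrates are in: {meal}? Answer with a number."
    lookup = {"one medium apple": "25 g", "two slices of white bread": "26 g"}
    return lookup.get(meal, "0 g")

def mean_absolute_error(meals, labels):
    """MAE over meals whose model answer contains a parseable number."""
    errors = []
    for meal, truth in zip(meals, labels):
        pred = parse_grams(estimate_carbs(meal))
        if pred is not None:
            errors.append(abs(pred - truth))
    return sum(errors) / len(errors)

meals = ["one medium apple", "two slices of white bread"]
labels = [25.0, 28.0]  # toy ground-truth carbohydrate grams
print(mean_absolute_error(meals, labels))  # 1.0 for these toy values
```

Chain-of-thought or RAG variants would change only the prompt inside `estimate_carbs`; the parsing and MAE scoring stay the same.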
Innovation

Methods, ideas, or system contributions that make the work stand out.

First public natural language meal nutrition benchmark
Evaluated 12 LLMs with diverse prompting strategies
Simulated diabetes risk via blood glucose predictions