Are Vision-Language Models Ready for Dietary Assessment? Exploring the Next Frontier in AI-Powered Food Image Recognition

📅 2025-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current automatic dietary assessment is hindered by accuracy bottlenecks in fine-grained food recognition—particularly distinguishing cooking methods and visually similar ingredients. To address this, we systematically evaluate six state-of-the-art vision-language models (VLMs) across food detection, classification, and nutritional inference tasks. We introduce FoodNExTDB, the first expert-annotated, multi-level food database comprising 9,263 images and 50,000 nutrition labels. Further, we propose Expert-Weighted Recall (EWR), a novel evaluation metric that accounts for inter-annotator variability. Experimental results show that proprietary VLMs achieve >90% EWR on single-food image recognition but exhibit substantial performance degradation on fine-grained discrimination. FoodNExTDB is publicly released, establishing a high-quality benchmark and reproducible evaluation framework for food-centric VLM research.

📝 Abstract
Automatic dietary assessment based on food images remains a challenge, requiring precise food detection, segmentation, and classification. Vision-Language Models (VLMs) offer new possibilities by integrating visual and textual reasoning. In this study, we evaluate six state-of-the-art VLMs (ChatGPT, Gemini, Claude, Moondream, DeepSeek, and LLaVA), analyzing their capabilities in food recognition at different levels. For the experimental framework, we introduce FoodNExTDB, a unique food image database that contains 9,263 expert-labeled images across 10 categories (e.g., "protein source"), 62 subcategories (e.g., "poultry"), and 9 cooking styles (e.g., "grilled"). In total, FoodNExTDB includes 50k nutritional labels generated by seven experts who manually annotated all images in the database. We also propose a novel evaluation metric, Expert-Weighted Recall (EWR), that accounts for inter-annotator variability. Results show that closed-source models outperform open-source ones, achieving over 90% EWR in recognizing food products in images containing a single product. Despite their potential, current VLMs face challenges in fine-grained food recognition, particularly in distinguishing subtle differences in cooking styles and visually similar food items, which limits their reliability for automatic dietary assessment. FoodNExTDB is publicly available at https://github.com/AI4Food/FoodNExtDB.
Problem

Research questions and friction points this paper is trying to address.

Evaluating VLMs for food image recognition in dietary assessment
Assessing VLMs' accuracy in fine-grained food classification
Addressing challenges in distinguishing cooking styles and similar foods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes Vision-Language Models for food recognition
Introduces FoodNExTDB with expert-labeled food images
Proposes Expert-Weighted Recall (EWR) as an evaluation metric
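The paper's exact definition of Expert-Weighted Recall is not given on this page; the abstract only says it "accounts for inter-annotator variability." A minimal sketch of one plausible agreement-weighted formulation is below — the function name, data layout, and the agreement-fraction weighting scheme are all assumptions for illustration, not the authors' actual metric:

```python
from collections import Counter

def expert_weighted_recall(predictions, expert_annotations):
    """Hypothetical agreement-weighted recall sketch (not the paper's definition).

    expert_annotations: image_id -> list of labels, one per annotating expert.
    predictions: image_id -> list of labels predicted by the model.
    A label is credited in proportion to the fraction of experts who assigned
    it, so high-agreement labels weigh more than contested ones.
    """
    total_weight = 0.0
    recalled_weight = 0.0
    for image_id, labels in expert_annotations.items():
        n_experts = len(labels)           # one annotation per expert
        counts = Counter(labels)          # label -> number of experts using it
        predicted = set(predictions.get(image_id, []))
        for label, count in counts.items():
            weight = count / n_experts    # inter-annotator agreement weight
            total_weight += weight
            if label in predicted:
                recalled_weight += weight
    return recalled_weight / total_weight if total_weight else 0.0
```

For example, if two of three experts labeled an image "poultry" and one labeled it "fish", a model predicting only "poultry" would recover weight 2/3 of a total 1.0, giving a score of about 0.67 — majority labels dominate, as intended.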
Sergio Romero-Tapiador
Predoctoral Researcher, Universidad Autónoma de Madrid
Wearable Devices · Food Computing · Bioinformatics · Machine Learning · DeepFakes
Rubén Tolosana
Biometrics and Data Pattern Analytics Lab, Universidad Autónoma de Madrid, Madrid, Spain
Blanca Lacruz-Pleguezuelos
IMDEA Food, CEI UAM+CSIC, Madrid, Spain
Laura Judith Marcos Zambrano
IMDEA Food, CEI UAM+CSIC, Madrid, Spain
Guadalupe X. Bazán
IMDEA Food, CEI UAM+CSIC, Madrid, Spain
Isabel Espinosa-Salinas
IMDEA Food, CEI UAM+CSIC, Madrid, Spain
Julian Fiérrez
Biometrics and Data Pattern Analytics Lab, Universidad Autónoma de Madrid, Madrid, Spain
J. Ortega-Garcia
Biometrics and Data Pattern Analytics Lab, Universidad Autónoma de Madrid, Madrid, Spain
Enrique Carrillo-de Santa Pau
IMDEA Food, CEI UAM+CSIC, Madrid, Spain
Aythami Morales
Biometrics and Data Pattern Analytics Lab, Universidad Autónoma de Madrid, Madrid, Spain