🤖 AI Summary
This work addresses the challenge of accurately estimating food types and portion sizes from a single pre-meal image, a task that existing methods handle only with depth sensors, multi-view imagery, or explicit segmentation. The authors propose a framework built on vision-language models that operates solely on paired pre- and post-meal RGB images, eliminating the need for segmentation masks or specialized hardware. Natural language prompts localize individual food items and estimate their weights, and a two-stage training strategy computes intake from the weight difference between the paired images, enabling fine-grained dietary assessment. The method is the first application of vision-language models to paired mealtime images for food-item-level nutritional analysis; it achieves state-of-the-art results on three public benchmarks and offers a practical baseline for real-world dietary monitoring.
📝 Abstract
Accurate dietary assessment is critical for precision nutrition, yet most image-based methods rely on a single pre-consumption image and provide only coarse, meal-level estimates. These approaches cannot determine what was actually consumed and often require restrictive inputs such as depth sensing, multi-view imagery, or explicit segmentation. In this paper, we propose a simple vision-language framework for food-item-level nutritional analysis using paired before-and-after eating images. Instead of relying on rigid segmentation masks, our method leverages natural language prompts to localize specific food items and estimate their weight directly from a single RGB image. We further estimate food consumption by predicting weight differences between paired images using a two-stage training strategy. We evaluate our method on three publicly available datasets and demonstrate consistent improvements over existing approaches, establishing a strong baseline for before-and-after dietary image analysis.
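To make the workflow concrete, below is a minimal Python sketch of the paired-image pipeline the abstract describes. Everything here is an illustrative assumption rather than the paper's implementation: `query_vlm` is a hypothetical stand-in for whatever vision-language model backend is used, the JSON prompt format is invented, and the two-stage training (first learning per-item weight regression, then intake prediction) is only gestured at in comments.

```python
# Hypothetical sketch of prompt-based, segmentation-free intake estimation
# from paired pre-/post-meal RGB images. `query_vlm` is NOT from the paper;
# it is a placeholder for any vision-language model API.

import json


def query_vlm(image_path: str, prompt: str) -> str:
    """Placeholder: send an image plus a natural-language prompt to a VLM
    and return its text response. Replace with a real model call."""
    raise NotImplementedError("wire up your VLM backend here")


# Assumed prompt format: the VLM names each food item and regresses its
# weight in grams, returned as JSON.
WEIGHT_PROMPT = (
    "List every food item visible on the plate and estimate the weight "
    'of each in grams. Respond as JSON: {"item": grams, ...}'
)


def estimate_weights(image_path: str) -> dict[str, float]:
    # Stage 1 (per the abstract): localize items via the language prompt and
    # estimate per-item weight from a single RGB image, with no segmentation
    # masks, depth sensing, or multi-view input.
    return json.loads(query_vlm(image_path, WEIGHT_PROMPT))


def estimate_intake(pre_image: str, post_image: str) -> dict[str, float]:
    # Stage 2: consumed weight = pre-meal weight minus post-meal weight,
    # computed per food item across the paired images (clamped at zero).
    before = estimate_weights(pre_image)
    after = estimate_weights(post_image)
    return {
        item: max(0.0, before[item] - after.get(item, 0.0))
        for item in before
    }
```

The design point carried over from the abstract is that no masks or special hardware appear anywhere: the prompt alone names and localizes the food items, and consumption falls out as a per-item subtraction between the two RGB images.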