🤖 AI Summary
Existing food image datasets are predominantly web-scraped, exhibiting significant distributional shift from real-world user-captured meal photos. Method: We introduce FoodLogAthl-218, a large-scale, real-world food image dataset derived from the dietary management mobile application FoodLog Athl, comprising 6,925 daily meal images across 218 fine-grained food categories, 14,349 bounding-box annotations, and rich structured metadata, including timestamps, anonymized user IDs, and multi-dish scene information. We propose a lightweight annotation paradigm initiated from user-uploaded images rather than a predefined class list, and define two FoodLog-specific tasks beyond standard classification: incremental fine-tuning along the temporal stream of user logs, and context-aware multi-dish classification. Contribution/Results: We benchmark large multimodal models (LMMs) on all three tasks, assessing real-world generalization and contextual understanding. The dataset is publicly released.
📝 Abstract
Food image classification models are crucial for dietary management applications because they reduce the burden of manual meal logging. However, most publicly available datasets for training such models rely on web-crawled images, which often differ from users' real-world meal photos. In this work, we present FoodLogAthl-218, a food image dataset constructed from real-world meal records collected through the dietary management application FoodLog Athl. The dataset contains 6,925 images across 218 food categories, with a total of 14,349 bounding boxes. Rich metadata, including meal date and time, anonymized user IDs, and meal-level context, accompanies each image. Unlike conventional datasets, where a predefined class set guides web-based image collection, our data begins with user-submitted photos, and labels are applied afterward. This yields greater intra-class diversity, a natural frequency distribution of meal types, and casual, unfiltered images intended for personal use rather than public sharing. In addition to (1) a standard classification benchmark, we introduce two FoodLog-specific tasks: (2) an incremental fine-tuning protocol that follows the temporal stream of users' logs, and (3) a context-aware classification task where each image contains multiple dishes, and the model must classify each dish by leveraging the overall meal context. We evaluate these tasks using large multimodal models (LMMs). The dataset is publicly available at https://huggingface.co/datasets/FoodLog/FoodLogAthl-218.
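The incremental fine-tuning protocol described in task (2) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the field names (`user_id`, `timestamp`, `label`) and the fixed-size chunking scheme are assumptions for the sake of the example, not the dataset's real schema.

```python
from collections import defaultdict
from datetime import datetime

def temporal_chunks(records, chunk_size):
    """Group each user's meal logs chronologically and yield successive
    (train_chunk, eval_chunk) pairs: fine-tune on chunk t, evaluate on
    chunk t+1, mimicking a temporal stream of logs.

    `records` is a list of dicts with hypothetical keys
    'user_id', 'timestamp' (ISO 8601 string), and 'label'.
    """
    by_user = defaultdict(list)
    for r in records:
        by_user[r["user_id"]].append(r)
    for user, logs in by_user.items():
        # sort this user's logs by meal time before splitting
        logs.sort(key=lambda r: datetime.fromisoformat(r["timestamp"]))
        chunks = [logs[i:i + chunk_size]
                  for i in range(0, len(logs), chunk_size)]
        for t in range(len(chunks) - 1):
            yield user, chunks[t], chunks[t + 1]

# toy usage with three logs from one user, out of chronological order
records = [
    {"user_id": "u1", "timestamp": "2023-01-02T08:00", "label": "rice"},
    {"user_id": "u1", "timestamp": "2023-01-01T19:30", "label": "miso_soup"},
    {"user_id": "u1", "timestamp": "2023-01-03T12:00", "label": "salad"},
]
for user, train, test in temporal_chunks(records, chunk_size=1):
    print(user, [r["label"] for r in train], "->", [r["label"] for r in test])
```

In a real run, each `train` chunk would be used to update the model before it is scored on the following chunk, so evaluation always happens on logs the model has not yet seen.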