CaMiT: A Time-Aware Car Model Dataset for Classification and Generation

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of adapting visual systems to the temporal evolution of objects in dynamic environments (e.g., car models emerging, evolving, and disappearing over time). We propose a novel paradigm, time-incremental fine-grained learning. To support it, we introduce CaMiT, the first large-scale temporal fine-grained car dataset, comprising 787K labeled and 5.1M unlabeled samples and enabling both classification and generation tasks. Methodologically, we design a time-aware classification framework that integrates supervised and self-supervised learning with temporal metadata, and we propose a conditional time-aware image generation mechanism to improve cross-temporal generation fidelity. Experiments show that our approach matches large-scale generalist models while using fewer resources, significantly improves robustness in cross-temporal classification, and enables high-fidelity, temporally controllable image generation. This work establishes the first benchmark and methodological framework for fine-grained vision under temporal evolution.

📝 Abstract
AI systems must adapt to evolving visual environments, especially in domains where object appearances change over time. We introduce Car Models in Time (CaMiT), a fine-grained dataset capturing the temporal evolution of car models, a representative class of technological artifacts. CaMiT includes 787K labeled samples of 190 car models (2007-2023) and 5.1M unlabeled samples (2005-2023), supporting both supervised and self-supervised learning. Static pretraining on in-domain data achieves competitive performance with large-scale generalist models while being more resource-efficient, yet accuracy declines when models are tested across years. To address this, we propose a time-incremental classification setting, a realistic continual learning scenario with emerging, evolving, and disappearing classes. We evaluate two strategies: time-incremental pretraining, which updates the backbone, and time-incremental classifier learning, which updates only the final layer, both improving temporal robustness. Finally, we explore time-aware image generation that leverages temporal metadata during training, yielding more realistic outputs. CaMiT offers a rich benchmark for studying temporal adaptation in fine-grained visual recognition and generation.
Problem

Research questions and friction points this paper is trying to address.

Addressing AI adaptation to evolving car model appearances over time
Developing time-incremental learning for temporal robustness in classification
Exploring time-aware image generation using temporal metadata
Innovation

Methods, ideas, or system contributions that make the work stand out.

Time-incremental classification setting for continual learning
Time-incremental pretraining strategy updating backbone model
Time-aware image generation leveraging temporal metadata
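For intuition, the classifier-only strategy above (frozen backbone, only the final classification stage updated as new time periods arrive) could be sketched with a nearest-class-mean classifier over fixed features. This is an illustrative simplification, not the paper's actual implementation: the class `TimeIncrementalNCM` and the synthetic two-dimensional "features" are assumptions made for the sketch.

```python
import numpy as np

class TimeIncrementalNCM:
    """Nearest-class-mean classifier over frozen backbone features.

    Only per-class running statistics are updated at each time period;
    the feature extractor itself is never retrained. New car models can
    appear in any period simply by adding a new class entry.
    """

    def __init__(self):
        self.sums = {}    # class label -> running feature sum
        self.counts = {}  # class label -> number of samples seen

    def update(self, feats, labels):
        # Incorporate one time period's labeled features incrementally.
        for f, y in zip(feats, labels):
            if y not in self.sums:
                self.sums[y] = np.zeros_like(f, dtype=float)
                self.counts[y] = 0
            self.sums[y] += f
            self.counts[y] += 1

    def predict(self, feats):
        # Assign each feature vector to the class with the nearest mean.
        classes = sorted(self.sums)
        means = np.stack([self.sums[c] / self.counts[c] for c in classes])
        dists = np.linalg.norm(feats[:, None, :] - means[None, :, :], axis=2)
        return [classes[i] for i in dists.argmin(axis=1)]

# Toy demo: two "years", each introducing one synthetic car model.
rng = np.random.default_rng(0)
clf = TimeIncrementalNCM()
clf.update(rng.normal([0.0, 0.0], 0.1, (20, 2)), ["model_a"] * 20)  # year 1
clf.update(rng.normal([3.0, 3.0], 0.1, (20, 2)), ["model_b"] * 20)  # year 2
preds = clf.predict(np.array([[0.0, 0.0], [3.0, 3.0]]))
```

The point of the sketch is the update pattern: each call to `update` consumes one time slice without revisiting earlier data, which is what makes the classifier-learning variant cheap compared with re-pretraining the backbone.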
Authors

Frédéric LIN — Université Paris-Saclay, CEA, List, F-91120, Palaiseau, France
Biruk Abere Ambaw — Université Paris-Saclay, CEA, List, F-91120, Palaiseau, France
Adrian Popescu — CEA LIST, France (incremental learning, semi-supervised learning, privacy, multimedia information retrieval)
Hejer Ammar — Université Paris-Saclay, CEA, List, F-91120, Palaiseau, France
Romaric Audigier — Université Paris-Saclay, CEA, List, F-91120, Palaiseau, France
Hervé Le Borgne — CEA List, France (multimedia content analysis, multimedia information retrieval, zero-shot learning, computer vision)