Future-Proofing Class Incremental Learning

📅 2024-04-04
🏛️ Machine Vision and Applications
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses exemplar-free class-incremental learning, where a model must learn new classes sequentially without storing replay exemplars and where the quality of the feature extractor depends heavily on the limited set of classes available in the initial task. Rather than optimizing only for the classes seen so far, the authors prepare the model for classes it has not yet encountered: a text-to-image diffusion model generates synthetic images of plausible future classes, and these synthetic images are used alongside the initial task's real data when training the feature extractor. This "future-proofing" of the representation reduces the dependency on the initial class data and improves exemplar-free incremental learning performance.

Problem

Research questions and friction points this paper is trying to address.

Addresses exemplar-free class incremental learning without replay memory
Overcomes dependency on initial class data in feature extractor training
Uses synthetic future class images to improve incremental learning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using text-to-image diffusion model for synthetic data
Generating future class images to train feature extractor
Improving exemplar-free incremental learning performance
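The Innovation bullets above amount to a data-mixing step before feature-extractor training: synthesize images for anticipated future classes and append them, with their own labels, to the first task's real data. The sketch below is illustrative only — the function names, label scheme, and especially the stub generator (which merely stands in for a real text-to-image diffusion model prompted with a class name) are assumptions, not the authors' implementation.

```python
import numpy as np


def generate_future_class_images(class_name, n, rng, shape=(32, 32, 3)):
    """Stand-in for a text-to-image diffusion model.

    In the paper's setting, a prompt built from `class_name` (e.g.
    "a photo of a zebra") would be fed to a pretrained generator;
    here we return random arrays purely as placeholders.
    """
    return rng.random((n, *shape), dtype=np.float32)


def build_first_task_dataset(real_images, real_labels,
                             future_class_names, n_per_future_class, rng):
    """Mix real data for the initial classes with synthetic images for
    anticipated future classes, so the feature extractor is trained on a
    broader label space from the start (exemplar-free: nothing is replayed
    later, the synthetic data only shapes the initial representation)."""
    images = [real_images]
    labels = [real_labels]
    next_label = int(real_labels.max()) + 1  # future classes get fresh labels
    for name in future_class_names:
        synth = generate_future_class_images(name, n_per_future_class, rng)
        images.append(synth)
        labels.append(np.full(n_per_future_class, next_label))
        next_label += 1
    return np.concatenate(images), np.concatenate(labels)
```

Swapping the stub for an actual generator (e.g. a Stable-Diffusion-style pipeline) leaves the mixing logic unchanged; only the image source differs.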
Quentin Jodelet
Department of Computer Science, Tokyo Institute of Technology, W8-59 2-12-1 Ookayama, Meguro, 152-8552, Tokyo, Japan; Artificial Intelligence Research Center, AIST, 2-4-7 Aomi, Koto, 135-0064, Tokyo, Japan
Xin Liu
Artificial Intelligence Research Center, AIST, 2-4-7 Aomi, Koto, 135-0064, Tokyo, Japan
Yin Jun Phua
Department of Computer Science, Tokyo Institute of Technology, W8-59 2-12-1 Ookayama, Meguro, 152-8552, Tokyo, Japan
Tsuyoshi Murata
Professor, Department of Computer Science, Tokyo Institute of Technology
Artificial Intelligence · Network Science · Graph Neural Networks · Machine Learning · Social Network Analysis