CD²: Constrained Dataset Distillation for Few-Shot Class-Incremental Learning

📅 2025-09-01
🏛️ International Joint Conference on Artificial Intelligence
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses catastrophic forgetting in few-shot class-incremental learning by proposing the CD² framework, which leverages a classifier-guided dataset distillation mechanism to synthesize highly condensed, representative samples. To preserve the feature distribution of previously learned classes, CD² incorporates a distribution-constrained loss that effectively maintains historical knowledge without substantially increasing storage overhead. By enabling efficient reuse of distilled exemplars, the method significantly mitigates forgetting while maintaining model plasticity for new tasks. Extensive experiments on three benchmark datasets demonstrate that CD² outperforms state-of-the-art approaches, achieving notable improvements in few-shot class-incremental learning performance.
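The classifier-guided synthesis in the DDM can be pictured with a short sketch. The page does not spell out the paper's actual distillation objective, so the following is only a minimal illustration of classifier-guided dataset distillation: synthetic exemplars are optimized so that a frozen classifier confidently assigns them to their target classes. All names and hyperparameters (`distill_exemplars`, `steps`, `lr`) are assumptions.

```python
# Minimal sketch of classifier-guided dataset distillation in the spirit of
# the DDM described above; the exact CD² objective is not given on this page.
import torch
import torch.nn.functional as F

def distill_exemplars(classifier, class_ids, n_per_class=1,
                      image_shape=(3, 32, 32), steps=500, lr=0.1):
    """Return condensed synthetic samples for the requested classes."""
    classifier.eval()  # the frozen classifier guides the synthesis
    labels = torch.tensor(class_ids).repeat_interleave(n_per_class)
    synth = torch.randn(len(labels), *image_shape, requires_grad=True)
    opt = torch.optim.Adam([synth], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        logits = classifier(synth)
        # Force each synthetic sample to carry the class-related evidence
        # the classifier relies on for its target label.
        loss = F.cross_entropy(logits, labels)
        loss.backward()
        opt.step()
    return synth.detach(), labels
```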

📝 Abstract
Few-shot class-incremental learning (FSCIL), which aims to perform classification continuously with only a few training samples per new class, has received significant attention but suffers from the key problem of catastrophic forgetting. Existing methods usually employ an external memory to store previous knowledge and treat it equally with the incremental classes, which cannot properly preserve essential previous knowledge. To solve this problem, and inspired by recent distillation works on knowledge transfer, we propose a framework termed Constrained Dataset Distillation (CD²) to facilitate FSCIL, which includes a dataset distillation module (DDM) and a distillation constraint module (DCM). Specifically, the DDM synthesizes highly condensed samples guided by the classifier, forcing the model to learn compact, essential class-related clues from a few incremental samples. The DCM introduces a designed loss to constrain the previously learned class distribution, which more effectively preserves the distilled knowledge. Extensive experiments on three public datasets show the superiority of our method against state-of-the-art competitors.
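The DCM's distribution constraint can likewise be sketched. The paper's exact loss is not reproduced on this page; the prototype-matching form below is an illustrative assumption in which the current per-class feature means of distilled old-class exemplars are pulled toward the class statistics stored when those classes were learned.

```python
# Illustrative distribution-constraint regularizer in the spirit of the DCM;
# the prototype-matching form and all names here are assumptions.
import torch

def distribution_constraint_loss(encoder, synth, labels, old_means):
    """old_means: dict mapping class id -> stored mean feature vector."""
    feats = encoder(synth)  # features of the distilled exemplars
    classes = labels.unique().tolist()
    loss = feats.new_zeros(())
    for c in classes:
        cur_mean = feats[labels == c].mean(dim=0)
        # Keep each class's current mean close to its stored prototype.
        loss = loss + torch.norm(cur_mean - old_means[c]) ** 2
    return loss / len(classes)
```

In an incremental session, such a term would be added to the standard classification loss on the new few-shot classes, trading plasticity on new tasks against retention of previously learned class distributions.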
Problem

Research questions and friction points this paper is trying to address.

few-shot class-incremental learning
catastrophic forgetting
knowledge preservation
dataset distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset Distillation
Class-Incremental Learning
Catastrophic Forgetting
Knowledge Preservation
Few-Shot Learning
Kexin Bao
Institute of Information Engineering, Chinese Academy of Sciences
Daichi Zhang
Institute of Information Engineering, Chinese Academy of Sciences
Hansong Zhang
Institute of Information Engineering, Chinese Academy of Sciences
Yong Li
Institute of Software, Chinese Academy of Sciences
Automata theory · Model checking
Yutao Yue
Hong Kong University of Science and Technology (Guangzhou)
Shiming Ge
Institute of Information Engineering, Chinese Academy of Sciences
Computer Vision · Artificial Intelligence