🤖 AI Summary
This work addresses catastrophic forgetting in few-shot class-incremental learning by proposing the CD² framework, which leverages a classifier-guided dataset distillation mechanism to synthesize highly condensed, representative samples. To preserve the feature distribution of previously learned classes, CD² incorporates a distribution-constrained loss that effectively maintains historical knowledge without substantially increasing storage overhead. By enabling efficient reuse of distilled exemplars, the method significantly mitigates forgetting while maintaining model plasticity for new tasks. Extensive experiments on three benchmark datasets demonstrate that CD² outperforms state-of-the-art approaches, achieving notable improvements in few-shot class-incremental learning performance.
📝 Abstract
Few-shot class-incremental learning (FSCIL) has attracted significant attention for its ability to classify continually arriving classes from only a few training samples, yet it suffers from the key problem of catastrophic forgetting. Existing methods usually employ an external memory to store previous knowledge and treat it equally with the incremental classes, which fails to properly preserve essential previous knowledge. To address this problem, and inspired by recent distillation works on knowledge transfer, we propose a framework termed Constrained Dataset Distillation (CD²) to facilitate FSCIL, comprising a dataset distillation module (DDM) and a distillation constraint module (DCM). Specifically, the DDM synthesizes highly condensed samples guided by the classifier, forcing the model to learn compact, essential class-related clues from a few incremental samples. The DCM introduces a designed loss that constrains the previously learned class distribution, preserving the distilled knowledge more fully. Extensive experiments on three public datasets demonstrate the superiority of our method over other state-of-the-art competitors.
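To make the two modules concrete, the sketch below shows the general idea in toy form: synthetic exemplars are optimized so that a frozen classifier assigns them to their classes (classifier-guided distillation, as in the DDM), while a second penalty keeps each exemplar near a stored per-class feature mean (a distribution constraint, in the spirit of the DCM). All shapes, the linear classifier, the prototype means, and the loss weighting `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy setup (all values illustrative, not from the paper):
D, C = 8, 3                       # feature dim, number of old classes
W = rng.normal(size=(D, C))       # frozen linear classifier guiding distillation
proto = rng.normal(size=(C, D))   # stored per-class feature means ("old" distribution)
y = np.arange(C)                  # one synthetic exemplar per class
X = rng.normal(size=(C, D))       # synthetic exemplars to optimize
lam, lr = 0.1, 0.1                # distribution-constraint weight, step size

def loss(X):
    # Classifier-guided term: cross-entropy of the frozen classifier on X.
    p = softmax(X @ W)
    ce = -np.log(p[np.arange(C), y]).mean()
    # Distribution constraint: squared distance to each class's stored mean.
    dist = lam * ((X - proto[y]) ** 2).sum(axis=1).mean()
    return ce + dist

loss0 = loss(X)
for _ in range(300):
    p = softmax(X @ W)
    grad_ce = (p - np.eye(C)[y]) @ W.T / C     # d(mean CE)/dX
    grad_dist = 2 * lam * (X - proto[y]) / C   # d(mean dist)/dX
    X -= lr * (grad_ce + grad_dist)            # gradient descent on the exemplars
loss1 = loss(X)
```

After optimization, `loss1` is lower than `loss0`: the exemplars have become condensed carriers of class-discriminative information while staying anchored to the old-class distribution, which is the trade-off the combined objective encodes.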