Few-shot Class-Incremental Learning via Generative Co-Memory Regularization

📅 2026-01-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of overfitting on new classes and catastrophic forgetting of old classes in few-shot class-incremental learning. To mitigate these issues, the authors propose a generative collaborative memory regularization approach. During the base training phase, a memory bank is constructed by integrating a Masked Autoencoder (MAE) with generative domain-adaptive fine-tuning to store class-level feature prototypes and classifier weights. In the incremental phase, this memory bank is leveraged through a collaborative memory regularization mechanism that dynamically constrains classifier updates, thereby preserving prior knowledge while effectively adapting to new classes. Extensive experiments demonstrate that the proposed method significantly outperforms existing state-of-the-art approaches across multiple standard benchmarks, achieving higher overall accuracy and effectively alleviating both catastrophic forgetting and overfitting.
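The memory bank described above stores a feature prototype (class-mean feature) and a classifier-weight snapshot for each base class. A minimal sketch of that construction, assuming simple NumPy arrays for features and weights (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def build_memories(features, labels, classifier_weights):
    """Build class-wise representation and weight memories.

    features: (N, D) encoder features for base-class examples
    labels: (N,) integer class labels
    classifier_weights: (C, D) learned fully-connected classifier weights
    """
    classes = np.unique(labels)
    # Representation memory: mean feature (prototype) per class.
    repr_memory = {int(c): features[labels == c].mean(axis=0) for c in classes}
    # Weight memory: snapshot of the classifier row per class.
    weight_memory = {int(c): classifier_weights[int(c)].copy() for c in classes}
    return repr_memory, weight_memory
```

In the incremental phase these two memories are what the collaborative regularization mechanism reads from when constraining classifier updates.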

📝 Abstract
Few-shot class-incremental learning (FSCIL) aims to incrementally learn models from small amounts of novel data, which requires models learned under few-example supervision to have strong representation and adaptation ability in order to avoid catastrophic forgetting of old classes and overfitting to novel classes. This work proposes a generative co-memory regularization approach to facilitate FSCIL. In this approach, the base learning stage applies generative domain-adaptive finetuning to a pretrained generative encoder on a few examples of base classes, jointly incorporating a masked autoencoder (MAE) decoder for feature reconstruction and a fully-connected classifier for feature classification, which enables the model to efficiently capture general and adaptable representations. Using the finetuned encoder and learned classifier, two class-wise memories are constructed: a representation memory storing the mean feature of each class, and a weight memory storing the classifier weights. Memory-regularized incremental learning then trains the classifier dynamically on the few-shot examples of each incremental session by simultaneously optimizing feature classification and co-memory regularization. The memories are updated in a class-incremental manner and collaboratively regularize the incremental learning. In this way, the learned models improve recognition accuracy while mitigating catastrophic forgetting of old classes and overfitting to novel classes. Extensive experiments on popular benchmarks clearly demonstrate that the approach outperforms state-of-the-art methods.
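The abstract's incremental objective combines feature classification with co-memory regularization. One plausible reading, sketched here in NumPy, is a cross-entropy term on the new session's examples plus penalties keeping the old-class classifier rows close to the weight memory and aligned with the representation memory; the exact regularizers and their weighting are assumptions inferred from the abstract, not the paper's formulation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def co_memory_loss(features, labels, W, weight_mem, repr_mem,
                   lam_w=1.0, lam_r=1.0):
    """Classification loss plus two memory penalties (illustrative).

    features: (N, D) features of the current session's examples
    labels:   (N,) integer class labels
    W:        (C, D) current classifier weights
    weight_mem, repr_mem: (C_old, D) stored memories for old classes,
        assumed stacked as arrays here.
    """
    # Feature classification: cross-entropy over all classes seen so far.
    probs = softmax(features @ W.T)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    n_old = len(weight_mem)
    # Weight memory: keep old-class rows close to their stored snapshot.
    reg_w = ((W[:n_old] - weight_mem) ** 2).sum() / n_old
    # Representation memory: keep old-class rows aligned with prototypes.
    reg_r = ((W[:n_old] - repr_mem) ** 2).sum() / n_old
    return ce + lam_w * reg_w + lam_r * reg_r
```

When the old-class rows of `W` match both memories, the penalties vanish and only the classification term remains, which is how the mechanism preserves prior knowledge while still letting new-class rows adapt.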
Problem

Research questions and friction points this paper is trying to address.

Few-shot class-incremental learning
Catastrophic forgetting
Overfitting
Incremental learning
Small-sample learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Few-shot Class-Incremental Learning
Generative Co-Memory Regularization
Masked Autoencoder
Catastrophic Forgetting Mitigation
Memory-based Regularization