🤖 AI Summary
To address severe catastrophic forgetting and storage constraints in non-exemplar class-incremental learning (NECIL), where no historical samples are retained for replay, this paper proposes a continual learning framework based on dynamically integrated, lightweight task-specific adapters. Methodologically, it introduces: (1) a novel patch-level mechanism for dynamically integrating adapters, enabling efficient cross-task parameter reuse; (2) a Patch-Level Distillation Loss (PDL) and Patch-Level Feature Reconstruction (PFR), which jointly preserve decision-boundary consistency and representation stability without access to original training data; and (3) a modular, lightweight architecture that substantially reduces computational overhead. Evaluated on standard NECIL benchmarks, the method achieves state-of-the-art accuracy with a minimal memory footprint, demonstrating strong generalization and high computational efficiency.
📝 Abstract
Non-Exemplar Class-Incremental Learning (NECIL) enables models to continuously acquire new classes without retraining from scratch or storing exemplars of old tasks, thereby addressing privacy and storage concerns. However, the absence of data from earlier tasks exacerbates catastrophic forgetting in NECIL. In this paper, we propose a novel framework called Dynamic Integration of task-specific Adapters (DIA), which comprises two key components: Task-Specific Adapter Integration (TSAI) and Patch-Level Model Alignment. TSAI boosts compositionality through a patch-level adapter integration strategy, which provides a more flexible compositional solution while maintaining low computational cost. Patch-Level Model Alignment maintains feature consistency and accurate decision boundaries via two specialized mechanisms: a Patch-Level Distillation Loss (PDL) and a Patch-Level Feature Reconstruction (PFR) method. Specifically, PDL preserves feature-level consistency between successive models through a distillation loss weighted by the contribution of each patch token to new-class learning. PFR enables accurate classifier alignment by reconstructing old-class features from previous tasks so that they adapt to new task knowledge. Extensive experiments validate the effectiveness of DIA, revealing significant improvements on benchmark datasets in the NECIL setting while maintaining an optimal balance between computational complexity and accuracy.
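To make the idea behind a patch-level distillation loss concrete, here is a minimal sketch, not the paper's actual implementation: it penalizes the feature drift of each patch token between the old and new model, weighted by a per-patch contribution score (here assumed to be, e.g., attention from the class token; the function name, the weighting scheme, and the score source are all illustrative assumptions).

```python
import numpy as np

def patch_distillation_loss(old_feats, new_feats, contrib):
    """Hypothetical patch-level distillation loss.

    old_feats, new_feats: (num_patches, dim) patch-token features from the
        previous and current model for the same input.
    contrib: (num_patches,) non-negative scores approximating each patch
        token's contribution to new-class learning (assumed given).
    """
    # Normalize contributions into weights summing to 1.
    w = contrib / (contrib.sum() + 1e-8)
    # Squared L2 drift of each patch token's feature.
    drift = np.linalg.norm(new_feats - old_feats, axis=1) ** 2
    # Contribution-weighted average drift.
    return float((w * drift).sum())
```

Patches that matter more for the new task are thus constrained more strongly, in contrast to a plain (unweighted) feature distillation that treats all tokens equally.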