🤖 AI Summary
This work addresses domain-incremental learning without data replay, tackling the core challenge of enabling vision models to continually adapt to sequential domain shifts while mitigating catastrophic forgetting. We propose Dual-level Concept Prototypes (DualCP)—a per-class representation comprising coarse-grained and fine-grained prototypes—alongside three novel components: a Concept Prototype Generator (CPG) for dynamic prototype construction, a Coarse-to-Fine Calibrator (C2F) for hierarchical feature–prototype alignment, and a Dual Dot-Regression (DDR) loss for stable optimization of the C2F module. Our approach preserves historical knowledge and integrates new domain information efficiently—without storing past-domain samples or requiring full model retraining. Evaluated on the DomainNet, CDDB, and CORe50 benchmarks, it consistently outperforms state-of-the-art rehearsal-free methods, achieving average accuracy gains of 3.2%–5.7%. The framework significantly alleviates forgetting and enhances cross-domain generalization.
📝 Abstract
Domain-Incremental Learning (DIL) enables vision models to adapt to changing conditions in real-world environments while maintaining the knowledge acquired from previous domains. Given privacy concerns and training-time costs, Rehearsal-Free DIL (RFDIL) is the more practical setting. Inspired by the incremental cognitive process of the human brain, we design Dual-level Concept Prototypes (DualCP) for each class to address the conflict between learning new knowledge and retaining old knowledge in RFDIL. To construct DualCP, we propose a Concept Prototype Generator (CPG) that generates both coarse-grained and fine-grained prototypes for each class. Additionally, we introduce a Coarse-to-Fine Calibrator (C2F) to align image features with DualCP. Finally, we propose a Dual Dot-Regression (DDR) loss function to optimize our C2F module. Extensive experiments on the DomainNet, CDDB, and CORe50 datasets demonstrate the effectiveness of our method.
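To make the dual-level idea concrete, below is a minimal sketch of what a dot-regression-style loss over two prototype levels could look like. It assumes (the paper's exact formulation may differ) that each term regresses the cosine similarity between a calibrated image feature and a class prototype toward 1, and that the coarse and fine terms are combined with a weighting factor `alpha`; the function names and `alpha` are illustrative, not taken from the paper.

```python
import numpy as np

def dot_regression_loss(feature, prototype):
    """Penalize deviation of the feature-prototype cosine similarity from 1.

    A standard dot-regression formulation: 0.5 * (cos(f, p) - 1)^2.
    """
    f = feature / np.linalg.norm(feature)
    p = prototype / np.linalg.norm(prototype)
    cos_sim = float(f @ p)
    return 0.5 * (cos_sim - 1.0) ** 2

def dual_dot_regression_loss(feature, coarse_proto, fine_proto, alpha=0.5):
    """Hypothetical DDR sketch: weighted sum of dot-regression terms
    against the coarse- and fine-grained prototypes of the target class.
    The equal default weighting is an assumption for illustration.
    """
    return (alpha * dot_regression_loss(feature, coarse_proto)
            + (1.0 - alpha) * dot_regression_loss(feature, fine_proto))

# A feature perfectly aligned with both prototypes incurs zero loss;
# misalignment with either level contributes a positive penalty.
feat = np.array([1.0, 0.0])
print(dual_dot_regression_loss(feat, np.array([2.0, 0.0]), np.array([3.0, 0.0])))  # 0.0
print(dual_dot_regression_loss(feat, np.array([2.0, 0.0]), np.array([0.0, 1.0])))  # 0.25
```

Under this reading, the coarse term keeps features near a broad, domain-stable concept while the fine term refines them toward class-specific detail, which is one plausible way to balance stability and plasticity without replayed samples.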