🤖 AI Summary
To address the challenges of stability-plasticity imbalance, cross-task knowledge interference, and the weak semantic structure of visual classifiers in continual learning (CL), this paper proposes SECA (Semantic-Enriched Continual Adaptation), a framework that incorporates CLIP's textual semantic priors into continual learning. SECA employs Semantic-Guided Adaptive Knowledge Transfer (SG-AKT) to constrain backbone updates with semantically relevant past knowledge, and Semantic-Enhanced Visual Prototype Refinement (SE-VPR) to refine visual prototypes using inter-class textual semantic relationships, enabling semantic-aware knowledge transfer and a structurally stronger classifier. Instance-level knowledge aggregation and text-vision alignment mechanisms further improve representation consistency. Evaluated on multiple standard continual learning benchmarks, SECA significantly mitigates catastrophic forgetting and improves adaptation to novel tasks, empirically validating the role of textual priors in robust knowledge transfer and in enriching classifier semantics.
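The prototype refinement idea can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the mixing coefficient `alpha`, and the temperature are hypothetical; the sketch only assumes that each class has a visual prototype and a CLIP text embedding, and that prototypes are pulled toward their semantic neighbors in text-embedding space.

```python
import numpy as np

def refine_prototypes(visual_protos, text_embeds, temperature=0.05, alpha=0.5):
    """Hypothetical SE-VPR-style refinement: mix each class's visual prototype
    with other prototypes, weighted by inter-class text-embedding similarity.
    visual_protos: (C, D) visual prototypes; text_embeds: (C, D) text embeddings."""
    t = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    sim = t @ t.T / temperature                    # inter-class semantic relations (C, C)
    np.fill_diagonal(sim, -np.inf)                 # a class should not reinforce itself
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)              # row-wise softmax -> neighbor weights
    semantic_mix = w @ visual_protos               # semantically aggregated prototypes
    return alpha * visual_protos + (1 - alpha) * semantic_mix
```

With `alpha=1.0` the prototypes are left untouched; smaller values inject more of the textual class structure into the visual classifier.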
📄 Abstract
Continual learning (CL) aims to equip models with the ability to learn from a stream of tasks without forgetting previous knowledge. With the progress of vision-language models like Contrastive Language-Image Pre-training (CLIP), their promise for CL has attracted increasing attention due to their strong generalizability. However, the potential of CLIP's rich textual semantic priors for addressing the stability-plasticity dilemma remains underexplored. During backbone training, most approaches transfer past knowledge without considering semantic relevance, leading to interference from unrelated tasks that disrupts the balance between stability and plasticity. Moreover, while text-based classifiers provide strong generalization, they suffer from limited plasticity due to the inherent modality gap in CLIP. Visual classifiers help bridge this gap, but their prototypes lack rich and precise semantics. To address these challenges, we propose Semantic-Enriched Continual Adaptation (SECA), a unified framework that harnesses the anti-forgetting and structured nature of textual priors to guide semantic-aware knowledge transfer in the backbone and reinforce the semantic structure of the visual classifier. Specifically, a Semantic-Guided Adaptive Knowledge Transfer (SG-AKT) module is proposed to assess new images' relevance to diverse historical visual knowledge via textual cues, and to aggregate relevant knowledge in an instance-adaptive manner as distillation signals. Moreover, a Semantic-Enhanced Visual Prototype Refinement (SE-VPR) module is introduced to refine visual prototypes using inter-class semantic relations captured in class-wise textual embeddings. Extensive experiments on multiple benchmarks validate the effectiveness of our approach.
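The instance-adaptive aggregation step described for SG-AKT can be sketched as follows. This is a rough illustration under assumed shapes, not the paper's method: the function name, the temperature `tau`, and the use of max text similarity as the relevance cue are all assumptions; the sketch only assumes one frozen feature per past task plus that task's class text embeddings.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sg_akt_target(img_feat, past_feats, past_text_embeds, tau=0.07):
    """Hypothetical SG-AKT-style aggregation: weight each past task's frozen
    visual feature by the new image's similarity to that task's text
    embeddings, then use the weighted sum as a distillation target.
    img_feat: (D,); past_feats: (T, D); past_text_embeds: list of (C_t, D)."""
    z = img_feat / np.linalg.norm(img_feat)
    # textual cue: best match against any class text of each past task
    rel = np.array([
        (z @ (t / np.linalg.norm(t, axis=1, keepdims=True)).T).max()
        for t in past_text_embeds
    ])                                           # per-task relevance scores (T,)
    w = softmax(rel / tau)                       # instance-adaptive weights
    return w @ past_feats                        # (D,) distillation target
```

The current backbone's feature for the image would then be regressed toward this target (e.g. with a cosine or L2 distillation loss), so each instance is constrained mainly by semantically related past tasks rather than by all of them equally.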