🤖 AI Summary
This work addresses catastrophic forgetting and the high memory overhead of continual learning by proposing AESP (Adapter-Enhanced Semantic Prompting), a lightweight framework. AESP integrates semantic-guided prompt tuning with parameter-efficient adapters: semantic-oriented prompts enrich visual features with semantic information, adapters fuse that information into task-adaptive representations, and a novel matching mechanism selects the appropriate task prompt at inference. Crucially, AESP requires no rehearsal of old data and adds only a small number of parameters, keeping the memory footprint low. Evaluated on three mainstream continual learning benchmarks, AESP achieves favorable performance across multiple metrics, including accuracy and forgetting, demonstrating the effectiveness of combining semantic prompting with adapters to mitigate forgetting and improve generalization.
📝 Abstract
Continual learning (CL) enables models to adapt to evolving data streams. A major challenge of CL is catastrophic forgetting, where newly acquired knowledge overwrites previously learned knowledge. Traditional methods typically retain past data for replay or add extra branches to the model to learn new knowledge, both of which incur high memory costs. In this paper, we propose a novel lightweight CL framework, Adapter-Enhanced Semantic Prompting (AESP), which integrates prompt tuning and adapter techniques. Specifically, we design semantic-guided prompts to enhance the generalization ability of visual features and utilize adapters to efficiently fuse the semantic information, aiming to learn more adaptive features for continual learning. Furthermore, to choose the right task prompt for feature adaptation, we develop a novel matching mechanism for prompt selection. Extensive experiments on three CL datasets demonstrate that our approach achieves favorable performance across multiple metrics, showing its potential for advancing CL.
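The abstract does not detail the matching mechanism, but prompt-selection in prompt-based CL is commonly implemented by pairing each task prompt with a learnable key and matching an input's backbone feature against those keys. The sketch below is a minimal illustration of that general query-key scheme, not the authors' actual mechanism; the class and method names (`PromptPool`, `select`) and the use of cosine similarity are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

class PromptPool:
    """Illustrative task-prompt pool with key-based selection.

    This is a generic query-key sketch, NOT the AESP paper's actual
    matching mechanism, whose details are not given in the abstract.
    """

    def __init__(self, num_tasks, prompt_len, dim):
        # One learnable key and one prompt sequence per task
        # (initialized randomly here; trained jointly in practice).
        self.keys = rng.standard_normal((num_tasks, dim))
        self.prompts = rng.standard_normal((num_tasks, prompt_len, dim))

    def select(self, query):
        """Pick the task prompt whose key best matches the query feature.

        query: (dim,) pooled visual feature from a frozen backbone.
        Returns the chosen task index and its prompt tokens.
        """
        q = query / np.linalg.norm(query)
        k = self.keys / np.linalg.norm(self.keys, axis=1, keepdims=True)
        idx = int(np.argmax(k @ q))  # cosine similarity, best match
        return idx, self.prompts[idx]  # prompt shape: (prompt_len, dim)
```

At inference the selected prompt tokens would be prepended to the input sequence (and, in AESP's case, fused with semantic information via adapters) before the transformer forward pass.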