🤖 AI Summary
To mitigate catastrophic forgetting in pretrained models during continual learning, this paper proposes Sparse Orthogonal Tuning (SoTU), a parameter-efficient adaptation method. SoTU freezes the backbone network and introduces lightweight, sparse, orthogonally constrained delta parameters that are updated incrementally per task. These per-task deltas are then fused via orthogonal projection and sparsity-aware optimization. Crucially, SoTU is the first approach to jointly integrate sparsity and orthogonality into the parameter-update mechanism of continual learning, replacing conventional adapter- or prompt-based fine-tuning. Evaluated on multiple standard continual learning benchmarks, SoTU achieves state-of-the-art feature representation in a plug-and-play, retraining-free manner, requiring no task-specific classifier design. It significantly outperforms leading adapter- and prompt-based methods while offering superior generalization and deployment efficiency.
📝 Abstract
Continual learning methods based on pre-trained models (PTMs), which adapt to successive downstream tasks without catastrophic forgetting, have recently gained attention. These methods typically refrain from updating the pre-trained parameters and instead employ additional adapters, prompts, and classifiers. In this paper, we investigate, from a novel perspective, the benefit of sparse orthogonal parameters for continual learning. We find that merging the sparse orthogonal parameters of models learned from multiple streaming tasks has great potential for addressing catastrophic forgetting. Leveraging this insight, we propose a novel and effective method called SoTU (Sparse Orthogonal Parameters TUning). We hypothesize that the effectiveness of SoTU lies in transforming knowledge learned from multiple domains into a fusion of orthogonal delta parameters. Experimental evaluations on diverse CL benchmarks demonstrate the effectiveness of the proposed approach. Notably, SoTU achieves optimal feature representation for streaming data without necessitating complex classifier designs, making it a plug-and-play solution.
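The abstract does not spell out the fusion mechanics, but the core idea of merging sparse, mutually orthogonal delta parameters onto a frozen backbone can be sketched. Below is a minimal NumPy illustration of one plausible reading: each task's delta (fine-tuned weights minus pretrained weights) is magnitude-sparsified, and the sparse deltas are summed onto the frozen backbone. The function names `sparsify` and `fuse_deltas`, and the magnitude-based masking, are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def sparsify(delta, keep_ratio=0.1):
    """Keep only the largest-magnitude entries of a delta tensor, zeroing
    the rest. Magnitude masking is a common sparsification heuristic and
    an assumption here, not necessarily SoTU's exact criterion."""
    flat = np.abs(delta).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]
    return np.where(np.abs(delta) >= threshold, delta, 0.0)

def fuse_deltas(base, deltas):
    """Fuse per-task sparse deltas onto the frozen backbone by summation.
    Deltas with disjoint nonzero supports are exactly orthogonal (their
    elementwise product sums to zero), so summation does not let one
    task's update overwrite another's."""
    return base + np.sum(deltas, axis=0)

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))                       # frozen pretrained weight
task_updates = [0.01 * rng.normal(size=(4, 4)) for _ in range(3)]  # toy per-task deltas
sparse_deltas = [sparsify(d, keep_ratio=0.1) for d in task_updates]
merged = fuse_deltas(base, np.stack(sparse_deltas))
```

In this reading, sparsity makes each task's footprint small, and (approximate) orthogonality between the sparse deltas is what lets them be merged without interference, which would explain why no task-specific classifier redesign is needed at merge time.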