Semantic-Guided Dynamic Sparsification for Pre-Trained Model-based Class-Incremental Learning

📅 2026-01-29
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses a key limitation of pre-trained model-based class-incremental learning: rigid parameter-orthogonality constraints restrict plasticity and exacerbate interference between old and new knowledge. To overcome this, the authors propose a semantic-guided dynamic sparsification approach that abandons fixed constraints in the parameter space and instead dynamically allocates subspaces in the activation space based on semantic similarity among classes. Specifically, semantically similar classes share compact subspaces to facilitate knowledge transfer, while dissimilar classes are assigned non-overlapping subspaces to mitigate interference. The method further stabilizes representations through activation alignment and rank control. Evaluated on multiple benchmark datasets, the approach significantly alleviates catastrophic forgetting, improves adaptability, and achieves state-of-the-art class-incremental performance.
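To make the allocation idea concrete, below is a minimal sketch of semantic-guided subspace assignment. It is not the authors' implementation: the prototype source, the greedy grouping rule, the similarity threshold, and the fixed per-group block size are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def allocate_activation_subspaces(prototypes, dim, block_size=32,
                                  sim_threshold=0.7):
    """Assign each class a binary mask over `dim` activation units.

    Classes whose prototypes are mutually similar (cosine similarity
    above `sim_threshold`) are greedily grouped and share one compact
    block of units; distinct groups receive disjoint blocks, so
    dissimilar classes land in non-overlapping subspaces.
    """
    sims = F.cosine_similarity(prototypes.unsqueeze(1),
                               prototypes.unsqueeze(0), dim=-1)
    groups = []  # each entry is a list of class indices sharing a block
    for c in range(prototypes.size(0)):
        for members in groups:
            if all(sims[c, m] > sim_threshold for m in members):
                members.append(c)
                break
        else:
            groups.append([c])

    assert len(groups) * block_size <= dim, "activation space exhausted"
    masks = torch.zeros(prototypes.size(0), dim)
    for g, members in enumerate(groups):
        masks[members, g * block_size:(g + 1) * block_size] = 1.0
    return masks

# Toy usage: 10 classes with 128-dim prototypes, 768-dim activations
# (hypothetical numbers, not taken from the paper).
protos = torch.randn(10, 128)
masks = allocate_activation_subspaces(protos, dim=768)
```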

📝 Abstract
Class-Incremental Learning (CIL) requires a model to continually learn new classes without forgetting old ones. A common and efficient solution freezes a pre-trained model and employs lightweight adapters, whose parameters are often forced to be orthogonal to prevent inter-task interference. However, we argue that this parameter-constraining method is detrimental to plasticity. To this end, we propose Semantic-Guided Dynamic Sparsification (SGDS), a novel method that proactively guides the activation space by governing the orientation and rank of its subspaces through targeted sparsification. Specifically, SGDS promotes knowledge transfer by encouraging similar classes to share a compact activation subspace, while simultaneously preventing interference by assigning non-overlapping activation subspaces to dissimilar classes. By sculpting class-specific sparse subspaces in the activation space, SGDS effectively mitigates interference without imposing rigid constraints on the parameter space. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of SGDS.
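As a complement, here is a minimal sketch of how such class-group masks might gate a frozen-backbone adapter, with a nuclear-norm penalty standing in for rank control. The residual adapter shape, the mask density, and the penalty are assumptions for illustration, not SGDS's actual architecture or objective.

```python
import torch
import torch.nn as nn

class MaskedAdapter(nn.Module):
    """Bottleneck adapter over a frozen backbone; its output is gated
    by a class-group mask so each group writes only into its allotted
    activation subspace."""
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x, mask):
        h = self.up(torch.relu(self.down(x)))
        return x + h * mask  # residual update confined to the masked subspace

def rank_penalty(acts):
    """Nuclear-norm surrogate keeping masked activations low-rank,
    i.e. confined to a compact subspace (an illustrative stand-in for
    the paper's rank control)."""
    return torch.linalg.matrix_norm(acts, ord='nuc') / acts.size(0)

# Toy usage with random features and a random 10%-density group mask.
adapter = MaskedAdapter()
feats = torch.randn(8, 768)             # frozen-backbone features
mask = (torch.rand(768) < 0.1).float()  # hypothetical group mask
out = adapter(feats, mask)
loss = rank_penalty(out * mask)
```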
Problem

Research questions and friction points this paper is trying to address.

Class-Incremental Learning
Pre-Trained Models
Catastrophic Forgetting
Parameter Plasticity
Activation Space
Innovation

Methods, ideas, or system contributions that make the work stand out.

Class-Incremental Learning
Pre-Trained Models
Activation Sparsification
Semantic Guidance
Dynamic Subspace
👥 Authors
Ruiqi Liu
Texas Tech University
nonparametric methods · machine learning · econometrics
Boyu Diao
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Zijia An
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Runjie Shao
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Zhulin An
Institute of Computing Technology, Chinese Academy of Sciences
Automatic Deep Learning · Lifelong Learning
Fei Wang
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Yongjun Xu
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China