Beyond Freezing: Sparse Tuning Enhances Plasticity in Continual Learning with Pre-Trained Models

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In continual learning with pre-trained models, there is a fundamental trade-off between the limited plasticity of frozen parameters and the catastrophic forgetting induced by full fine-tuning. To address this, the paper proposes Mutual Information-guided Sparse Tuning (MIST), which couples parameter-update sensitivity analysis with mutual information maximization. MIST adds a stochastic gradient-dropping mechanism that enforces an ultra-sparse constraint, updating fewer than 0.5% of parameters per step, within a lightweight, plug-and-play design. Evaluated across multiple continual learning benchmarks, MIST achieves significant gains over frozen-backbone adapter baselines while tuning less than 5% of parameters, demonstrating strong stability and generalization with minimal interference with prior knowledge. The implementation is publicly available.

📝 Abstract
Continual learning with pre-trained models (PTMs) holds great promise for efficient adaptation across sequential tasks. However, most existing approaches freeze the PTM and rely on auxiliary modules such as prompts or adapters, limiting model plasticity and leading to suboptimal generalization under significant distribution shifts. While full fine-tuning can improve adaptability, it risks disrupting crucial pre-trained knowledge. In this paper, we propose Mutual Information-guided Sparse Tuning (MIST), a plug-and-play method that selectively updates a small subset of PTM parameters (less than 5%) based on their sensitivity to mutual information objectives. MIST enables effective task-specific adaptation while preserving generalization. To further reduce interference, we introduce strong sparsity regularization by randomly dropping gradients during tuning, so that fewer than 0.5% of parameters are updated per step. Applied before standard freeze-based methods, MIST consistently boosts performance across diverse continual learning benchmarks. Experiments show that integrating our method into multiple baselines yields significant performance gains. Our code is available at https://github.com/zhwhu/MIST.
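The two-stage mechanism described in the abstract (select the most sensitive ~5% of parameters, then randomly drop gradients so under 0.5% are updated per step) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `sparse_update_mask` is hypothetical, and gradient magnitude stands in here for the paper's mutual-information sensitivity score.

```python
import numpy as np

def sparse_update_mask(grads, select_frac=0.05, keep_frac=0.005, rng=None):
    """Hypothetical sketch of MIST-style sparse tuning.

    1) Select the `select_frac` most "sensitive" parameters (here,
       largest gradient magnitude approximates the paper's
       mutual-information-based sensitivity).
    2) Stochastic gradient dropping: randomly keep only `keep_frac`
       of all parameters within that subset for this update step.
    """
    rng = np.random.default_rng() if rng is None else rng
    flat = np.abs(grads).ravel()
    n = flat.size
    n_select = max(1, int(n * select_frac))
    n_keep = max(1, int(n * keep_frac))
    # Indices of the n_select largest-magnitude gradients.
    selected = np.argpartition(flat, -n_select)[-n_select:]
    # Randomly drop gradients within the selected subset.
    kept = rng.choice(selected, size=min(n_keep, selected.size), replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[kept] = True
    return mask.reshape(grads.shape)

# Example: 1000 parameters -> 50 selected, 5 actually updated this step.
grads = np.random.default_rng(0).normal(size=(1000,))
mask = sparse_update_mask(grads, select_frac=0.05, keep_frac=0.005)
# A masked SGD step would then be: params -= lr * grads * mask
```

The mask changes each step, so over many steps the updates spread across the sensitive subset while any single step touches well under 1% of the model.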
Problem

Research questions and friction points this paper is trying to address.

Limited plasticity of frozen PTMs in continual learning
Suboptimal generalization under significant distribution shifts
Preserving pre-trained knowledge while enabling task-specific adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selectively updates a small fraction (under 5%) of PTM parameters
Guides parameter selection via sensitivity to mutual information objectives
Applies strong sparsity regularization via stochastic gradient dropping (under 0.5% per step)
Huan Zhang
School of Computer Science, Wuhan University
Fan Lyu
NLPR, CASIA
Shuyu Dong
State Key Laboratory of Green Pesticide, Central China Normal University
Shenghua Fan
School of Computer Science, Wuhan University
Yujin Zheng
School of Computer Science, Wuhan University
Dingwen Wang
School of Computer Science, Wuhan University