Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Continual learning faces the dual challenges of catastrophic forgetting and parameter inefficiency. To address these, we propose Perturb-and-Merge (P&M), a two-stage framework: during training, a zero-overhead random perturbation along the task-vector direction, grounded in a Hessian-based approximation, regularizes the model and enhances parameter robustness; at inference time, the historical model and the newly trained model are fused via a convex combination whose optimal merging coefficient is derived theoretically. Combined with LoRA for parameter-efficient fine-tuning, P&M requires no storage of historical data or gradients. Evaluated on multiple standard continual learning benchmarks, it achieves state-of-the-art performance, substantially mitigates forgetting, and incurs minimal memory overhead. Our work establishes both theoretical foundations and practical feasibility for model merging in continual learning.

📝 Abstract
Continual Learning (CL) aims to enable models to continuously acquire new knowledge from a sequence of tasks while avoiding the forgetting of previously learned information. However, existing CL methods rely only on the parameters of the most recent task for inference, which makes them susceptible to catastrophic forgetting. Inspired by the recent success of model merging techniques, we propose Perturb-and-Merge (P&M), a novel continual learning framework that integrates model merging into the CL paradigm to mitigate forgetting. Specifically, after training on each task, P&M constructs a new model by forming a convex combination of the previous model and the newly trained task-specific model. Through theoretical analysis, we minimize the total loss increase across all tasks and derive an analytical solution for the optimal merging coefficient. To further improve the performance of the merged model, we observe that the degradation introduced during merging can be alleviated by a regularization term composed of the task vector and the Hessian matrix of the loss function. Interestingly, we show that this term can be efficiently approximated using second-order symmetric finite differences, and we accordingly devise a stochastic perturbation strategy along the task-vector direction that incurs no additional forward or backward passes while providing an effective approximation of the regularization term. Finally, we combine P&M with LoRA, a parameter-efficient fine-tuning method, to reduce memory overhead. Our proposed approach achieves state-of-the-art performance on several continual learning benchmark datasets.
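As a rough illustration of the merging step described in the abstract, the sketch below forms a convex combination of the previous model's parameters and the newly trained task-specific parameters. This is a minimal sketch, not the authors' implementation: the paper derives an analytical optimum for the merging coefficient, whereas here `alpha` is simply a user-supplied scalar, and the dict-of-arrays parameter representation is an assumption.

```python
import numpy as np

def merge_models(theta_prev, theta_task, alpha):
    """Convex combination of the previous (merged) model and the newly
    trained task-specific model, parameter tensor by parameter tensor:

        theta_merged = (1 - alpha) * theta_prev + alpha * theta_task

    `alpha` in [0, 1] is the merging coefficient (hypothetical scalar here;
    the paper derives an analytically optimal value)."""
    return {name: (1.0 - alpha) * theta_prev[name] + alpha * theta_task[name]
            for name in theta_prev}

# Toy usage: two parameter dicts, merged with alpha = 0.5.
prev = {"w": np.array([0.0, 0.0])}
task = {"w": np.array([2.0, 4.0])}
merged = merge_models(prev, task, 0.5)
```

With `alpha = 0.5` the merge reduces to a simple parameter average; smaller `alpha` keeps the merged model closer to the historical one, which is the knob the theoretical analysis optimizes.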
Problem

Research questions and friction points this paper is trying to address.

Mitigates catastrophic forgetting in continual learning via model merging
Optimizes merging coefficients to minimize total loss across tasks
Enhances merged model performance with efficient Hessian-based regularization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Convex combination merging for model integration
Second-order finite differences for regularization
LoRA integration for memory efficiency
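The "second-order finite differences" item above refers to approximating the curvature term v^T H v (with H the Hessian of the loss and v the task vector) without forming H. The sketch below shows that symmetric finite-difference identity in isolation; it is an illustrative approximation, not the paper's zero-overhead training-time perturbation strategy, and `loss_fn`, `theta`, and `eps` are assumed names.

```python
import numpy as np

def vhv_finite_difference(loss_fn, theta, v, eps=1e-3):
    """Approximate v^T H v, where H is the Hessian of `loss_fn` at `theta`,
    using a second-order symmetric finite difference:

        v^T H v ~= (L(theta + eps*v) + L(theta - eps*v) - 2*L(theta)) / eps**2

    This needs only loss evaluations at perturbed parameters, never an
    explicit Hessian."""
    return (loss_fn(theta + eps * v) + loss_fn(theta - eps * v)
            - 2.0 * loss_fn(theta)) / eps**2

# Sanity check on a quadratic loss L(t) = 0.5 * t^T A t, whose Hessian is A,
# so the exact curvature term is v^T A v.
A = np.diag([2.0, 4.0])
quad_loss = lambda t: 0.5 * t @ A @ t
theta = np.array([1.0, -1.0])
v = np.array([1.0, 1.0])
approx = vhv_finite_difference(quad_loss, theta, v)  # exact v^T A v is 6.0
```

For a quadratic loss the symmetric difference is exact up to floating-point error, which makes it a convenient sanity check; on a general loss it is accurate to O(eps^2).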
Haomiao Qiu
Harbin Institute of Technology (Shenzhen)
Miao Zhang
Harbin Institute of Technology (Shenzhen)
Ziyue Qiao
Assistant Professor, Great Bay University
Data Mining · Graph Machine Learning · Knowledge Graph · AI for Science
Liqiang Nie
Harbin Institute of Technology (Shenzhen)