Exploring the Impact of Parameter Update Magnitude on Forgetting and Generalization of Continual Learning

📅 2026-02-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of theoretical understanding in existing continual learning methods of how the magnitude of parameter updates influences forgetting and generalization. Formalizing forgetting as knowledge degradation caused by task-specific parameter drift, the study introduces an optimization framework that constrains parameter updates. It unifies frozen training and initialization-based training within a single theoretical framework and proposes a hybrid strategy that adaptively adjusts update magnitudes based on gradient directions. Through parameter-space analysis and constrained optimization, the method significantly reduces catastrophic forgetting while improving generalization in deep neural networks, outperforming standard training approaches.

📝 Abstract
The magnitude of parameter updates is considered a key factor in continual learning. However, most existing studies focus on designing diverse update strategies, while a theoretical understanding of the underlying mechanisms remains limited. We therefore characterize a model's forgetting from the perspective of parameter update magnitude and formalize it as knowledge degradation induced by task-specific drift in the parameter space, an effect not fully captured in previous studies because they assume a unified parameter space. By deriving the optimal parameter update magnitude that minimizes forgetting, we unify two representative update paradigms, frozen training and initialized training, within an optimization framework for constrained parameter updates. Our theoretical results further reveal that task sequences with small parameter distances exhibit better generalization and less forgetting under frozen training than under initialized training. These theoretical insights inspire a novel hybrid parameter update strategy that adaptively adjusts update magnitude based on gradient directions. Experiments on deep neural networks demonstrate that this hybrid approach outperforms standard training strategies, providing new theoretical perspectives and practical inspiration for designing efficient and scalable continual learning algorithms.
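The abstract describes a hybrid update rule that adaptively scales the update magnitude based on gradient direction, interpolating between frozen training (no drift from previous-task parameters) and initialized training (unconstrained steps). The paper does not publish its rule here, so the following is only a minimal NumPy sketch of the general idea; the damping factor, the drift budget, and the function names are all assumptions, not the authors' method.

```python
import numpy as np

def hybrid_update(theta, grad, theta_prev, lr=0.1, damping=0.5, max_drift=1.0):
    """Hypothetical magnitude-constrained hybrid update (illustrative only).

    If the proposed step would push the parameters further from the
    previous task's solution theta_prev, damp it (toward frozen training);
    otherwise take the full step (toward initialized training). Finally,
    project the result back inside an assumed drift budget `max_drift`.
    """
    drift_dir = theta - theta_prev          # current drift from the old task
    step = -lr * grad                       # plain gradient-descent step
    # Damp steps whose direction increases the drift magnitude.
    scale = damping if np.dot(step, drift_dir) > 0 else 1.0
    new_theta = theta + scale * step
    # Hard constraint: keep ||new_theta - theta_prev|| within the budget.
    drift = new_theta - theta_prev
    norm = np.linalg.norm(drift)
    if norm > max_drift:
        new_theta = theta_prev + drift * (max_drift / norm)
    return new_theta
```

The projection step is what makes the update "constrained" in the abstract's sense: no matter the gradient, the parameters cannot drift more than `max_drift` from the previous task's solution, which bounds the knowledge degradation attributed to parameter drift.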
Problem

Research questions and friction points this paper is trying to address.

continual learning
parameter update magnitude
forgetting
generalization
knowledge degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

parameter update magnitude
knowledge degradation
frozen training
initialized training
continual learning
JinLi He
Institute of Intelligent Information Processing, Shanxi University, Taiyuan, 030006, China
Liang Bai
Institute of Intelligent Information Processing, Shanxi University, Taiyuan, 030006, China
Xian Yang
University of Manchester
Artificial Intelligence · Machine Learning · Healthcare AI · Natural Language Processing