🤖 AI Summary
This work addresses the lack of theoretical understanding in existing continual learning methods regarding how the magnitude of parameter updates influences forgetting and generalization. Formalizing forgetting as knowledge degradation caused by task-specific parameter drift, the study introduces an optimization framework that constrains parameter updates. It innovatively unifies frozen training and initialization-based training within a single theoretical framework and proposes a hybrid strategy that adaptively adjusts update magnitudes based on gradient directions. Through parameter space analysis and constrained optimization, the method significantly reduces catastrophic forgetting while enhancing generalization performance in deep neural networks, outperforming standard training approaches.
📝 Abstract
The magnitude of parameter updates is considered a key factor in continual learning. However, most existing studies focus on designing diverse update strategies, while a theoretical understanding of the underlying mechanisms remains limited. We therefore characterize a model's forgetting from the perspective of parameter update magnitude and formalize it as knowledge degradation induced by task-specific drift in the parameter space, an effect not fully captured in previous studies because of their assumption of a unified parameter space. By deriving the optimal parameter update magnitude that minimizes forgetting, we unify two representative update paradigms, frozen training and initialized training, within an optimization framework for constrained parameter updates. Our theoretical results further reveal that task sequences with small parameter distances exhibit better generalization and less forgetting under frozen training than under initialized training. These theoretical insights inspire a novel hybrid parameter update strategy that adaptively adjusts update magnitude based on gradient directions. Experiments on deep neural networks demonstrate that this hybrid approach outperforms standard training strategies, providing new theoretical perspectives and practical inspiration for designing efficient and scalable continual learning algorithms.
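The abstract's hybrid strategy, interpolating between frozen training (no update) and initialized training (a full update) according to gradient directions, can be sketched as follows. This is a minimal illustration, not the paper's actual rule: the gating by cosine alignment between the current gradient and a previous-task gradient, and the specific names `hybrid_update`, `prev_grad`, and `lr`, are assumptions for the sake of example.

```python
import math

def hybrid_update(theta, grad, prev_grad, lr=0.1):
    """Illustrative hybrid update (assumed form, not the paper's exact rule).

    Shrinks the step toward zero when the current gradient conflicts with
    the previous task's gradient direction (approximating frozen training),
    and takes a full gradient step when the directions agree (approximating
    initialized training).
    """
    # Cosine alignment between current and previous-task gradients.
    dot = sum(g * p for g, p in zip(grad, prev_grad))
    norm_g = math.sqrt(sum(g * g for g in grad))
    norm_p = math.sqrt(sum(p * p for p in prev_grad))
    align = dot / (norm_g * norm_p + 1e-12)

    # Map alignment in [-1, 1] to a step scale in [0, 1]:
    # conflicting directions -> near-frozen update, aligned -> full step.
    scale = 0.5 * (1.0 + align)
    return [t - lr * scale * g for t, g in zip(theta, grad)]

# Aligned gradients: a (nearly) full step is taken.
print(hybrid_update([0.0, 0.0], [1.0, 0.0], [1.0, 0.0]))
# Conflicting gradients: the parameters barely move.
print(hybrid_update([0.0, 0.0], [1.0, 0.0], [-1.0, 0.0]))
```

Under this reading, the parameter distance between tasks controls how often gradients conflict, which is consistent with the abstract's claim that small-distance task sequences favor the frozen end of the spectrum.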