Flashbacks to Harmonize Stability and Plasticity in Continual Learning

📅 2025-05-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Balancing model stability and plasticity remains a fundamental challenge in continual learning. This paper proposes Flashback Learning (FL), a bidirectional regularization framework that decouples and jointly optimizes these competing objectives. FL introduces two complementary knowledge repositories, an *old knowledge base* to enhance stability and a *new knowledge base* to boost plasticity, which together drive a two-stage training process. It supports plug-and-play integration with mainstream approaches, including experience replay, knowledge distillation, parameter regularization, and dynamic architectures. Theoretical analysis elucidates FL's mechanism for achieving stability-plasticity equilibrium. Empirically, FL improves average accuracy by up to 4.91% in class-incremental and 3.51% in task-incremental settings, and measurably improves the stability-to-plasticity ratio. It also outperforms state-of-the-art methods on more challenging benchmarks such as ImageNet.

📝 Abstract
We introduce Flashback Learning (FL), a novel method designed to harmonize the stability and plasticity of models in Continual Learning (CL). Unlike prior approaches that primarily focus on regularizing model updates to preserve old information while learning new concepts, FL explicitly balances this trade-off through a bidirectional form of regularization. This approach effectively guides the model to swiftly incorporate new knowledge while actively retaining its old knowledge. FL operates through a two-phase training process and can be seamlessly integrated into various CL methods, including replay, parameter regularization, distillation, and dynamic architecture techniques. In designing FL, we use two distinct knowledge bases: one to enhance plasticity and another to improve stability. FL ensures a more balanced model by utilizing both knowledge bases to regularize model updates. Theoretically, we analyze how the FL mechanism enhances the stability-plasticity balance. Empirically, FL demonstrates tangible improvements over baseline methods within the same training budget. By integrating FL into at least one representative baseline from each CL category, we observed an average accuracy improvement of up to 4.91% in Class-Incremental and 3.51% in Task-Incremental settings on standard image classification benchmarks. Additionally, measurements of the stability-to-plasticity ratio confirm that FL effectively enhances this balance. FL also outperforms state-of-the-art CL methods on more challenging datasets like ImageNet.
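The bidirectional regularization described in the abstract can be illustrated with a minimal sketch. This is a hypothetical formulation, not the paper's exact objective: it assumes the two knowledge bases are summarized as parameter snapshots `theta_old` (stability anchor) and `theta_new` (plasticity anchor), with quadratic pull terms toward each.

```python
import numpy as np

def flashback_loss(theta, task_loss, theta_old, theta_new,
                   lam_stab=0.5, lam_plas=0.5):
    """Hypothetical bidirectional objective: the task loss plus two pull
    terms, one toward old knowledge (stability) and one toward new
    knowledge (plasticity). The paper's actual formulation may differ."""
    stab = lam_stab * np.sum((theta - theta_old) ** 2)  # retain old knowledge
    plas = lam_plas * np.sum((theta - theta_new) ** 2)  # keep new knowledge
    return task_loss + stab + plas
```

With equal weights, minimizing both penalties drives the parameters toward a point between the two anchors, which is the intuition behind "harmonizing" stability and plasticity rather than regularizing in one direction only.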
Problem

Research questions and friction points this paper is trying to address.

How to balance stability (retaining old knowledge) and plasticity (acquiring new knowledge) in continual learning
How to regularize model updates so that old knowledge is preserved without suppressing new learning
How to improve accuracy in both class-incremental and task-incremental settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional regularization explicitly balances the stability-plasticity trade-off
Two-phase training integrates plug-and-play with replay, distillation, parameter-regularization, and dynamic-architecture methods
Dual knowledge bases (old and new) jointly regularize model updates
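The two-phase training idea above can be sketched on a toy problem. This is an illustrative loop under assumed details (plain gradient descent, knowledge bases as parameter snapshots, a single shared weight `lam`); every name here is hypothetical, not the paper's algorithm.

```python
import numpy as np

def two_phase_task_update(theta, grad_fn, lr=0.1, steps=50, lam=0.5):
    """Illustrative two-phase update for one task.
    Phase 1 learns the new task freely; phase 2 is a 'flashback' pass
    regularized toward both the old and the new knowledge base."""
    theta_old = theta.copy()                 # snapshot: old knowledge base
    # Phase 1: unconstrained learning of the new task.
    for _ in range(steps):
        theta = theta - lr * grad_fn(theta)
    theta_new = theta.copy()                 # snapshot: new knowledge base
    # Phase 2: flashback pass, pulled toward both knowledge bases.
    for _ in range(steps):
        g = (grad_fn(theta)
             + lam * (theta - theta_old)     # stability pull
             + lam * (theta - theta_new))    # plasticity pull
        theta = theta - lr * g
    return theta
```

On a quadratic toy loss the second phase settles between the pre-task and post-task solutions, which is the balancing behavior the method targets.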
🔎 Similar Papers
No similar papers found.