Finding Structure in Continual Learning

📅 2026-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the stability-plasticity dilemma in continual learning, where acquiring new knowledge often causes catastrophic forgetting of previously learned information. The authors apply the Douglas–Rachford splitting method to decouple the continual learning objective into two sub-problems: one promoting plasticity by facilitating learning on new tasks, the other enforcing stability by preserving knowledge from prior tasks. These sub-problems are solved iteratively via their proximal operators, which converge to a consensus solution. Notably, the method requires no external replay buffer or complex regularization scheme, sidestepping the gradient conflicts that commonly arise from multi-loss weighting strategies. By avoiding auxiliary modules, the proposed framework achieves substantial improvements in both model performance and training stability across sequential learning scenarios.
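To make the "negotiation via proximal operators" concrete, here is a minimal sketch of the Douglas–Rachford iteration on a toy problem. The quadratic objectives, their closed-form proximal operators, the step size `gamma`, and all variable names are illustrative assumptions for exposition, not the paper's actual losses or implementation:

```python
import numpy as np

# Toy stand-ins for the two decoupled objectives (assumed, not from the paper):
#   f(w) = (a/2) * ||w - c_new||^2   -> plasticity: fit the new task
#   g(w) = (b/2) * ||w - c_old||^2   -> stability: stay near old knowledge
# For quadratics, prox_{gamma*f}(v) has a closed form, so the DRS
# iteration can be written in a few lines.

def prox_quadratic(v, gamma, a, c):
    """prox_{gamma*f}(v) for f(w) = (a/2) * ||w - c||^2."""
    return (v + gamma * a * c) / (1.0 + gamma * a)

def douglas_rachford(c_new, c_old, a=1.0, b=1.0, gamma=1.0, iters=100):
    z = np.zeros_like(c_new)
    for _ in range(iters):
        x = prox_quadratic(z, gamma, a, c_new)          # plasticity step
        y = prox_quadratic(2 * x - z, gamma, b, c_old)  # stability step on the reflection
        z = z + y - x                                   # consensus (fixed-point) update
    return x

# The iterates converge to the minimizer of f + g, here the weighted
# average of the two anchors: (a*c_new + b*c_old) / (a + b).
c_new = np.array([1.0, 2.0])  # hypothetical "new task" solution
c_old = np.array([0.0, 0.0])  # hypothetical "old task" solution
w = douglas_rachford(c_new, c_old)
```

With equal weights the consensus lands midway between the two anchors, which is the sense in which DRS balances the sub-objectives rather than summing their gradients into one conflicting update.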

📝 Abstract
Learning from a stream of tasks usually pits plasticity against stability: acquiring new knowledge often causes catastrophic forgetting of past information. Most methods address this by summing competing loss terms, creating gradient conflicts that are managed with complex and often inefficient strategies such as external memory replay or parameter regularization. We propose a reformulation of the continual learning objective using Douglas-Rachford Splitting (DRS). This reframes the learning process not as a direct trade-off, but as a negotiation between two decoupled objectives: one promoting plasticity for new tasks and the other enforcing stability of old knowledge. By iteratively finding a consensus through their proximal operators, DRS provides a more principled and stable learning dynamic. Our approach achieves an efficient balance between stability and plasticity without the need for auxiliary modules or complex add-ons, providing a simpler yet more powerful paradigm for continual learning systems.
Problem

Research questions and friction points this paper is trying to address.

continual learning
catastrophic forgetting
stability-plasticity dilemma
task stream
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual Learning
Douglas-Rachford Splitting
Plasticity-Stability Trade-off
Proximal Operators
Gradient Conflict