🤖 AI Summary
This work addresses catastrophic forgetting and model drift in continual learning, which arise when the optimal solutions of successive tasks overlap only weakly. To mitigate these issues, the authors propose a trust-region constrained approach that integrates generative replay with Fisher information matrix weighting. Under a local quadratic approximation, the resulting update implicitly performs a MAML-style inner step, enabling rapid reconvergence to previous task optima without explicit bi-level optimization; the theoretical analysis makes this emergent meta-learning property precise. Empirical evaluations demonstrate that the proposed method significantly outperforms existing approaches on both diffusion-based image generation and diffusion policy control tasks, achieving state-of-the-art final performance, memory retention, and speed of early-task recovery.
📝 Abstract
Continual learning aims to acquire tasks sequentially without catastrophic forgetting, yet standard strategies face a core tradeoff: regularization-based methods (e.g., EWC) can overconstrain updates when task optima are weakly overlapping, while replay-based methods can retain performance but drift due to imperfect replay. We study a hybrid perspective: \emph{trust region continual learning}, which combines generative replay with a Fisher-metric trust region constraint. We show that, under local approximations, the resulting update admits a MAML-style interpretation with a single implicit inner step: replay supplies an old-task gradient signal (query-like), while the Fisher-weighted penalty provides efficient offline curvature shaping (support-like). This yields an emergent meta-learning property in continual learning: the model becomes an initialization that rapidly \emph{re-converges} to prior task optima after each task transition, without explicitly optimizing a bilevel objective. Empirically, on task-incremental diffusion image generation and continual diffusion-policy control, trust region continual learning achieves the best final performance and retention, and consistently recovers early-task performance faster than EWC, replay, and continual meta-learning baselines.
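To make the hybrid update concrete, here is a minimal sketch of one gradient step combining the three ingredients the abstract names: the current-task gradient, an old-task gradient estimated from (generative) replay samples, and an EWC-style Fisher-weighted quadratic penalty acting as the trust-region term. The function name, the diagonal-Fisher simplification, and the specific coefficients are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def trust_region_cl_step(theta, grad_new, grad_replay, theta_old, fisher_diag,
                         lam=1.0, beta=1.0, lr=0.1):
    """One hypothetical trust-region continual-learning update (a sketch).

    theta       -- current parameters
    grad_new    -- gradient of the current-task loss at theta
    grad_replay -- old-task gradient estimated from replayed samples
    theta_old   -- parameters after the previous task (trust-region center)
    fisher_diag -- diagonal Fisher approximation at theta_old
    lam, beta   -- illustrative weights on the penalty and replay terms
    """
    # Gradient of the Fisher-weighted penalty (lam/2) (theta-theta_old)^T F (theta-theta_old):
    grad_penalty = lam * fisher_diag * (theta - theta_old)
    # Combined update: new-task signal + replayed old-task signal + trust-region pull.
    total_grad = grad_new + beta * grad_replay + grad_penalty
    return theta - lr * total_grad
```

Increasing `lam` tightens the trust region: the Fisher-weighted pull keeps the step closer to the previous task's optimum, which is the curvature-shaping role the abstract attributes to the penalty.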