🤖 AI Summary
Continual learning (CL) faces the dual challenges of catastrophic forgetting (CF) and excessive memory overhead; existing approaches often rely on task replay or large external buffers, limiting scalability. This paper proposes No Forgetting Learning (NFL), a memory-free CL framework that eliminates reliance on external memory buffers. NFL jointly optimizes stability and plasticity by coupling sequential task training with knowledge distillation, establishing an end-to-end training paradigm free of replay and buffering, and introduces a new quantitative metric for assessing the plasticity-stability trade-off. Evaluated on three benchmark datasets, NFL achieves competitive accuracy while using roughly 14.75 times less memory than state-of-the-art buffer-based methods, improving efficiency and practical deployability in resource-constrained continual learning scenarios.
📝 Abstract
Continual Learning (CL) remains a central challenge in deep learning, where models must sequentially acquire new knowledge while mitigating Catastrophic Forgetting (CF) of prior tasks. Existing approaches often struggle with efficiency and scalability, requiring extensive memory or model buffers. This work introduces "No Forgetting Learning" (NFL), a memory-free CL framework that leverages knowledge distillation to maintain stability while preserving plasticity. Memory-free means that NFL does not rely on any memory buffer. Through extensive evaluations on three benchmark datasets, we demonstrate that NFL achieves competitive performance while utilizing approximately 14.75 times less memory than state-of-the-art methods. Furthermore, we introduce a new metric to better assess the plasticity-stability trade-off in CL.
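The abstract's core mechanism is knowledge distillation: the current model is penalized for drifting from the predictions of the previous-task model, which discourages forgetting without storing old data. Below is a minimal sketch of such a distillation penalty; the function names, temperature value, and exact loss form are illustrative assumptions, not NFL's actual objective.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Hinton-style distillation term (hypothetical, not NFL's exact loss):
    KL divergence between the frozen old model's softened outputs (teacher)
    and the current model's softened outputs (student), scaled by T^2 so the
    gradient magnitude is comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft targets from the previous-task model
    q = softmax(student_logits, T)  # current model's softened predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

In a memory-free setup, this term would be added to the new task's classification loss, so stability (matching the old model) and plasticity (fitting new data) are traded off by a weighting coefficient rather than by replaying stored samples.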