🤖 AI Summary
Online continual learning (onCL) faces several challenges: single-pass data access, unknown task boundaries, catastrophic forgetting in pre-trained models (PTMs), and sensitivity to the choice of learning rate. To address these, we propose a novel onCL paradigm that operates without task identifiers or hyperparameter tuning. First, we introduce Online Prototypes (OPs), a memory-efficient mechanism that dynamically maintains class-level representations in the PTM’s feature space, enabling rehearsal without explicitly storing past samples. Second, we design Class-Wise Hypergradients (CWH), which adaptively calibrate the gradient step size per class, removing the reliance on manual learning-rate tuning. Our method is the first to enable end-to-end, PTM-driven onCL. Extensive experiments demonstrate substantial improvements in both accuracy and stability across multiple benchmarks, while maintaining full compatibility with mainstream continual learning frameworks.
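The online-prototype idea described above (a per-class running mean of frozen-PTM features, updated in a single pass with no stored samples and no task boundaries) can be sketched as follows. This is a minimal illustration under those assumptions; the class name and the `replay_batch` helper are hypothetical, not the paper's exact implementation.

```python
import numpy as np

class OnlinePrototypes:
    """Illustrative sketch: per-class running mean of PTM features."""

    def __init__(self):
        self.protos = {}  # class id -> prototype vector
        self.counts = {}  # class id -> number of samples seen

    def update(self, feature, label):
        # Incremental mean p <- p + (x - p) / n: single pass,
        # no raw-data storage, no task boundary needed.
        if label not in self.protos:
            self.protos[label] = np.asarray(feature, dtype=float).copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.protos[label] += (feature - self.protos[label]) / self.counts[label]

    def replay_batch(self):
        # Prototypes stand in for replay samples of all classes seen so far.
        labels = sorted(self.protos)
        return np.stack([self.protos[c] for c in labels]), np.array(labels)
```

Memory cost is one vector per class, independent of stream length, which is what makes the mechanism viable without a replay buffer.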
📝 Abstract
Continual Learning (CL) addresses the problem of learning from a data sequence whose distribution changes over time. Recently, efficient solutions leveraging Pre-Trained Models (PTMs) have been widely explored in the offline CL (offCL) scenario, where the data for each incremental task is known beforehand and can be seen multiple times. However, such solutions often rely on 1) prior knowledge of task changes and 2) hyper-parameter search, particularly for the learning rate. Neither is available in online CL (onCL) scenarios, where the incoming data distribution is unknown and the model can observe each datum only once. As a result, existing offCL strategies largely fall behind in performance in onCL, and some are difficult or impossible to adapt to the online setting. In this paper, we tackle both problems by leveraging Online Prototypes (OP) and Class-Wise Hypergradients (CWH). OP exploits the stable output representations of the PTM, updating prototype values on the fly so that they act as replay samples without requiring task boundaries or storing past data. CWH learns class-dependent gradient coefficients during training to improve over sub-optimal learning rates. Experiments show that both strategies yield consistent accuracy gains when integrated with existing approaches. We will make the code fully available upon acceptance.
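To make the class-wise hypergradient idea concrete, here is a minimal sketch assuming the standard hypergradient-descent rule (adapting a step size by the dot product of successive gradients), kept separately per class. The function name, initial rate, and `beta` constant are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def class_wise_hypergradient_step(w, grad, label, lrs, prev_grads, beta=1e-4):
    """One SGD step with a per-class learning rate (hypothetical sketch).

    lrs:        dict mapping class id -> current learning rate
    prev_grads: dict mapping class id -> gradient from that class's last step
    """
    if label in prev_grads:
        # Hypergradient signal: grow the rate when successive gradients for
        # this class agree, shrink it when they oppose (floored for safety).
        lrs[label] = max(lrs[label] + beta * float(grad @ prev_grads[label]), 1e-6)
    else:
        # First sample of this class: start from a default rate.
        lrs.setdefault(label, 1e-2)
    prev_grads[label] = grad.copy()
    return w - lrs[label] * grad
```

Because each class maintains its own rate, a class seen long ago and a class arriving mid-stream are stepped at different, individually adapted magnitudes, which is the property the abstract attributes to CWH.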