🤖 AI Summary
Traditional continual learning methods assume static task data distributions, rendering them ill-suited for concept drift—i.e., persistent distributional shifts—which inherently compromises the trade-off between stability and adaptability. To address this, we propose a concept-drift-aware continual learning framework centered on an Adaptive Memory Realignment (AMR) mechanism. AMR employs a lightweight, drift-aware resampling strategy to dynamically refresh the replay buffer: it discards obsolete samples and realigns the buffer with emerging data distributions, matching full-retraining performance at negligible computational overhead. Crucially, the method requires only a small number of newly labeled samples, substantially reducing both computational and labeling costs. Evaluated on four newly constructed vision benchmarks explicitly designed for concept drift, our approach consistently outperforms existing baselines—achieving higher accuracy while reducing resource consumption by one to two orders of magnitude.
📝 Abstract
Traditional continual learning methods prioritize knowledge retention and focus primarily on mitigating catastrophic forgetting, implicitly assuming that the data distribution of previously learned tasks remains static. This overlooks the dynamic nature of real-world data streams, where concept drift permanently alters previously seen data and demands both stability and rapid adaptation.
We introduce a holistic framework for continual learning under concept drift that simulates realistic scenarios by evolving task distributions. As a baseline, we consider Full Relearning (FR), in which the model is retrained from scratch on newly labeled samples from the drifted distribution. While effective, this approach incurs substantial annotation and computational overhead. To address these limitations, we propose Adaptive Memory Realignment (AMR), a lightweight alternative that equips rehearsal-based learners with a drift-aware adaptation mechanism. AMR selectively removes outdated samples of drifted classes from the replay buffer and repopulates it with a small number of up-to-date instances, effectively realigning memory with the new distribution. This targeted resampling matches the performance of FR while reducing the need for labeled data and computation by orders of magnitude.
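The targeted resampling described above can be sketched in a few lines. The snippet below is an illustrative Python sketch, not the authors' implementation; the `ReplayBuffer` class, its per-class capacity, and the `realign` method are assumptions made for exposition:

```python
import random
from collections import defaultdict

class ReplayBuffer:
    """Class-balanced replay buffer with reservoir-style updates.

    Illustrative sketch of AMR-style realignment; names and the
    per-class storage layout are assumptions, not the paper's code.
    """

    def __init__(self, capacity_per_class=50):
        self.capacity = capacity_per_class
        self.store = defaultdict(list)  # label -> list of samples

    def add(self, sample, label):
        # Keep at most `capacity` samples per class; once full,
        # overwrite a uniformly random slot (reservoir-style).
        bucket = self.store[label]
        if len(bucket) < self.capacity:
            bucket.append(sample)
        else:
            bucket[random.randrange(self.capacity)] = sample

    def realign(self, drifted_label, fresh_samples):
        # AMR step: discard all stale samples of the drifted class,
        # then refill with a small set of up-to-date labeled instances.
        self.store[drifted_label] = []
        for s in fresh_samples[: self.capacity]:
            self.add(s, drifted_label)

    def sample(self, k):
        # Draw a rehearsal mini-batch across all stored classes.
        pool = [(s, y) for y, ss in self.store.items() for s in ss]
        return random.sample(pool, min(k, len(pool)))
```

Only the drifted class's slots are touched, so rehearsal for unaffected classes continues unchanged; this is what keeps the annotation and compute cost far below full relearning.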
To enable reproducible evaluation, we introduce four concept-drift variants of standard vision benchmarks: Fashion-MNIST-CD, CIFAR10-CD, CIFAR100-CD, and Tiny-ImageNet-CD, where previously seen classes reappear with shifted representations. Comprehensive experiments on these datasets using several rehearsal-based baselines show that AMR consistently counters concept drift, maintaining high accuracy with minimal overhead. These results position AMR as a scalable solution that reconciles stability and plasticity in non-stationary continual learning environments.