🤖 AI Summary
This work addresses the challenge of memory-constrained continual learning by proposing Multiple Embedding Replay Selection (MERS), a novel sample selection strategy for replay buffers. Existing methods predominantly rely on supervised embeddings alone, overlooking the class-relevant information encoded in self-supervised representations. MERS integrates complementary supervised and self-supervised embeddings through a graph-based fusion mechanism to improve sample selection under limited buffer capacity. Notably, it achieves this without increasing model parameters or replay data volume. Evaluated on CIFAR-100 and TinyImageNet, MERS outperforms state-of-the-art selection strategies and mitigates catastrophic forgetting, with particularly strong gains in low-memory regimes, demonstrating its efficacy in resource-constrained continual learning scenarios.
📝 Abstract
Catastrophic forgetting remains a key challenge in Continual Learning (CL). In replay-based CL under severe memory constraints, performance critically depends on the sample selection strategy for the replay buffer. Most existing approaches construct memory buffers using embeddings learned under supervised objectives. However, class-agnostic, self-supervised representations often encode rich, class-relevant semantics that these approaches overlook. We propose Multiple Embedding Replay Selection (MERS), a new method that replaces the buffer selection module with a graph-based approach integrating both supervised and self-supervised embeddings. Empirical results show consistent improvements over state-of-the-art selection strategies across a range of continual learning algorithms, with particularly strong gains in low-memory regimes. On CIFAR-100 and TinyImageNet, MERS outperforms single-embedding baselines without adding model parameters or increasing replay volume, making it a practical, drop-in enhancement for replay-based continual learning.
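The abstract does not specify how the graph-based fusion works internally, but the general idea of combining two embedding spaces via similarity graphs for buffer selection can be sketched as follows. Everything below is a hypothetical illustration: the kNN graph construction, the weighted-sum fusion (`alpha`), and degree-based selection are assumptions for exposition, not the paper's actual algorithm.

```python
import numpy as np

def knn_graph(emb, k=5):
    """Cosine-similarity kNN adjacency matrix over one embedding space."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = e @ e.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-edges
    adj = np.zeros_like(sim)
    nbrs = np.argsort(-sim, axis=1)[:, :k]  # top-k most similar neighbors
    rows = np.arange(len(emb))[:, None]
    adj[rows, nbrs] = sim[rows, nbrs]
    return adj

def multi_embedding_select(sup_emb, ssl_emb, buffer_size, k=5, alpha=0.5):
    """Illustrative selection: fuse the supervised and self-supervised
    graphs by a weighted sum, then keep the nodes with highest fused
    degree (a simple stand-in for graph-based representativeness)."""
    fused = alpha * knn_graph(sup_emb, k) + (1 - alpha) * knn_graph(ssl_emb, k)
    degree = fused.sum(axis=1)
    return np.argsort(-degree)[:buffer_size]
```

A usage sketch: given per-sample embeddings from the task model (`sup_emb`) and from a frozen self-supervised encoder (`ssl_emb`), `multi_embedding_select(sup_emb, ssl_emb, M)` returns the indices of the `M` samples to store in the replay buffer. Note this sketch scores all samples jointly; a real implementation would typically select per class to keep the buffer balanced.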