Leveraging Complementary Embeddings for Replay Selection in Continual Learning with Small Buffers

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of memory-constrained continual learning by proposing Multi-Embedding Replay Selection (MERS), a novel sample selection strategy for replay buffers. Existing methods predominantly rely on supervised embeddings alone, overlooking the class-relevant information embedded in self-supervised representations. MERS uniquely integrates complementary supervised and self-supervised embeddings through a graph-based fusion mechanism to enhance sample selection under limited buffer capacity. Notably, the approach achieves superior performance without increasing model parameters or replay data volume. Evaluated on CIFAR-100 and TinyImageNet, MERS significantly outperforms state-of-the-art methods, effectively mitigating catastrophic forgetting—particularly in low-memory regimes—demonstrating its efficacy in resource-constrained continual learning scenarios.
📝 Abstract
Catastrophic forgetting remains a key challenge in Continual Learning (CL). In replay-based CL with severe memory constraints, performance critically depends on the sample selection strategy for the replay buffer. Most existing approaches construct memory buffers using embeddings learned under supervised objectives. However, class-agnostic, self-supervised representations often encode rich, class-relevant semantics that are overlooked. We propose a new method, Multiple Embedding Replay Selection (MERS), which replaces the buffer selection module with a graph-based approach that integrates both supervised and self-supervised embeddings. Empirical results show consistent improvements over SOTA selection strategies across a range of continual learning algorithms, with particularly strong gains in low-memory regimes. On CIFAR-100 and TinyImageNet, MERS outperforms single-embedding baselines without adding model parameters or increasing replay volume, making it a practical, drop-in enhancement for replay-based continual learning.
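The abstract describes fusing supervised and self-supervised embeddings through a graph-based mechanism to pick replay samples. The paper does not spell out the algorithm here, so the sketch below is purely illustrative: it fuses two cosine-similarity graphs with a weight `alpha` and then greedily selects buffer samples that maximize coverage of the fused graph (a facility-location-style heuristic). All function names and the selection criterion are assumptions, not the authors' method.

```python
import numpy as np

def cosine_sim(X):
    # Row-normalize, then compute pairwise cosine similarity.
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    return Xn @ Xn.T

def replay_selection_sketch(sup_emb, ssl_emb, buffer_size, alpha=0.5):
    """Toy multi-embedding replay selection (NOT the paper's algorithm).

    Fuses a supervised and a self-supervised similarity graph, then
    greedily picks samples that best 'cover' the fused graph
    (facility-location greedy). `alpha` weights the supervised graph.
    """
    S = alpha * cosine_sim(sup_emb) + (1.0 - alpha) * cosine_sim(ssl_emb)
    n = S.shape[0]
    selected = []
    covered = np.zeros(n)  # best similarity of each sample to the buffer so far
    for _ in range(min(buffer_size, n)):
        # Marginal coverage gain of adding each candidate to the buffer.
        gains = np.maximum(S, covered).sum(axis=1) - covered.sum()
        gains[selected] = -np.inf  # never re-pick a selected sample
        j = int(np.argmax(gains))
        selected.append(j)
        covered = np.maximum(covered, S[j])
    return selected
```

The greedy coverage objective is one plausible way to trade off diversity and representativeness under a small buffer; the actual MERS fusion and selection rules are defined in the paper itself.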
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Catastrophic Forgetting
Replay Buffer
Sample Selection
Memory Constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual Learning
Replay Selection
Complementary Embeddings
Self-Supervised Learning
Small Buffer
Danit Yanowsky
School of Computer Science and Engineering, The Hebrew University of Jerusalem
Daphna Weinshall
Professor of Computer Science, Hebrew University
computer vision · machine learning · visual perception