🤖 AI Summary
Online continual learning (OCL) under privacy-sensitive constraints, where sample replay is strictly prohibited, remains a challenging open problem. Method: We propose Continual MultiPatches (CMP), a replay-free plug-in for existing self-supervised OCL strategies that processes data in a single streaming pass. CMP generates multiple patch views of each incoming example, projects them into a shared feature space, and optimizes intra-class compactness and inter-class separability via self-supervised contrastive learning, all without storing historical samples; a collapse-prevention mechanism further preserves representation quality. Contribution/Results: On standard OCL benchmarks, CMP substantially outperforms both replay-based state-of-the-art methods and existing self-supervised continual learning approaches. It is the first work to systematically demonstrate the feasibility and advantages of self-supervised continual learning under a strict no-replay constraint, establishing a new paradigm for privacy-critical applications.
📝 Abstract
Online Continual Learning (OCL) methods train a model on a non-stationary data stream where only a few examples are available at a time, often leveraging replay strategies. However, the use of replay is sometimes forbidden, especially in applications with strict privacy regulations. Therefore, we propose Continual MultiPatches (CMP), an effective plug-in for existing self-supervised OCL strategies that avoids the use of replay samples. CMP generates multiple patches from a single example and projects them into a shared feature space, where patches coming from the same example are pushed together without collapsing into a single point. CMP surpasses replay-based and other SSL strategies on OCL streams, challenging the role of replay as the go-to solution for self-supervised OCL.
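The core mechanism described above, pulling patch embeddings of the same example together while preventing all embeddings from collapsing to a single point, can be sketched as a toy objective. The sketch below is a hypothetical illustration, not the paper's actual loss: `cmp_style_loss`, the tensor shapes, and the VICReg-style variance hinge used as the anti-collapse term are all my assumptions.

```python
import numpy as np

def cmp_style_loss(z, eps=1e-4):
    """Toy multi-patch objective (illustrative, not the paper's exact loss).

    z: array of shape (batch, n_patches, dim), patch embeddings per example.
    Two terms:
      - attract: pull each patch embedding toward its example's mean embedding
        (patches from the same example end up close together);
      - anti_collapse: a hinge on the per-dimension std of the example means
        across the batch, penalizing solutions where every example maps to
        the same point (VICReg-style variance term, used here as a stand-in
        for CMP's collapse-prevention mechanism).
    """
    centers = z.mean(axis=1)                               # (batch, dim) per-example means
    attract = ((z - centers[:, None, :]) ** 2).sum(axis=-1).mean()
    std = np.sqrt(centers.var(axis=0) + eps)               # spread across examples, per dim
    anti_collapse = np.maximum(0.0, 1.0 - std).mean()      # hinge: keep std near 1
    return attract + anti_collapse
```

If every embedding is identical (full collapse), `attract` is zero but the hinge term is near its maximum, so the loss stays high; spreading example means apart while keeping each example's patches tight drives the loss down.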