MePo: Meta Post-Refinement for Rehearsal-Free General Continual Learning

๐Ÿ“… 2026-02-08
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the challenges of online data streams, ambiguous task boundaries, and limited integration of temporally mixed information under the single-pass setting of general continual learning. To this end, the authors propose a rehearsal-free post-hoc refinement method that, for the first time, incorporates the neuroscientific principles of metaplasticity and memory reconsolidation into continual learning. The approach constructs pseudo-task sequences from pretraining data and refines the pretrained backbone with a bilevel meta-learning framework, enabling rapid adaptation of representation learning; it further initializes a meta covariance matrix as a geometric reference for the representation space, supporting robust second-order output alignment. This plug-and-play strategy achieves performance gains of 15.10%, 13.36%, and 12.56% on CIFAR-100, ImageNet-R, and CUB-200, respectively, and demonstrates strong compatibility with diverse pretrained checkpoints.
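
To make the bilevel step concrete, here is a minimal sketch of the refinement loop, assuming a PyTorch backbone. Everything here is illustrative: the function names, the pseudo-task format, and the first-order (FOMAML-style) outer update are assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of MePo-style bi-level refinement over pseudo tasks.
# A first-order approximation stands in for the exact meta-gradient.
import copy
import torch
import torch.nn.functional as F

def meta_post_refine(backbone, pseudo_tasks, inner_lr=1e-2,
                     outer_lr=1e-4, inner_steps=3):
    """Refine a pretrained backbone on pseudo-task sequences built from
    pretraining data, so representations adapt quickly downstream."""
    outer_opt = torch.optim.SGD(backbone.parameters(), lr=outer_lr)
    for (x_s, y_s), (x_q, y_q) in pseudo_tasks:   # support/query per pseudo task
        fast = copy.deepcopy(backbone)            # inner-loop copy of the backbone
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):              # inner loop: rapid adaptation
            inner_opt.zero_grad()
            F.cross_entropy(fast(x_s), y_s).backward()
            inner_opt.step()
        fast.zero_grad()                          # outer loop: meta-objective on query split
        F.cross_entropy(fast(x_q), y_q).backward()
        outer_opt.zero_grad()
        for p, fp in zip(backbone.parameters(), fast.parameters()):
            p.grad = fp.grad.detach().clone()     # first-order gradient transfer
        outer_opt.step()
    return backbone
```

The adapted copy `fast` plays the role of the fast weights; only the slow backbone weights persist across pseudo tasks, mirroring the metaplasticity intuition of a slowly refined substrate.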

๐Ÿ“ Abstract
To cope with uncertain changes of the external world, intelligent systems must continually learn from complex, evolving environments and respond in real time. This ability, collectively known as general continual learning (GCL), encapsulates practical challenges such as online data streams and blurry task boundaries. Although leveraging pretrained models (PTMs) has greatly advanced conventional continual learning (CL), these methods remain limited in reconciling the diverse and temporally mixed information along a single pass, resulting in sub-optimal GCL performance. Inspired by meta-plasticity and reconstructive memory in neuroscience, we introduce an innovative approach named Meta Post-Refinement (MePo) for PTM-based GCL. This approach constructs pseudo task sequences from pretraining data and develops a bi-level meta-learning paradigm to refine the pretrained backbone, which serves as a prolonged pretraining phase but greatly facilitates rapid adaptation of representation learning to downstream GCL tasks. MePo further initializes a meta covariance matrix as the reference geometry of the pretrained representation space, enabling GCL to exploit second-order statistics for robust output alignment. MePo serves as a plug-in strategy that achieves significant performance gains across a variety of GCL benchmarks and pretrained checkpoints in a rehearsal-free manner (e.g., 15.10%, 13.36%, and 12.56% on CIFAR-100, ImageNet-R, and CUB-200 under Sup-21/1K). Our source code is available at https://github.com/SunGL001/MePo.
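
One plausible reading of the meta covariance component, sketched below: estimate a shrinkage-regularized covariance (and its inverse, the precision matrix) over pretrained features, then score downstream classes by Mahalanobis distance under that reference geometry. The function names and the shrinkage value are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch: a meta covariance matrix as reference geometry
# for second-order output alignment (Mahalanobis-style scoring).
import torch

def init_meta_covariance(features, shrinkage=1e-3):
    """Estimate the precision matrix of the pretrained representation
    space from an (N, D) feature matrix; shrinkage keeps it invertible."""
    centered = features - features.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (features.shape[0] - 1)
    cov += shrinkage * torch.eye(cov.shape[0])
    return torch.linalg.inv(cov)

def align_outputs(feats, class_means, precision):
    """Return logits for (B, D) features against (C, D) class means:
    smaller Mahalanobis distance under the reference geometry -> larger logit."""
    diffs = feats.unsqueeze(1) - class_means.unsqueeze(0)        # (B, C, D)
    d2 = torch.einsum('bcd,de,bce->bc', diffs, precision, diffs)
    return -d2
```

Under this reading, the precision matrix is fixed after pretraining, so new classes only require running class-mean estimates, which fits the rehearsal-free, single-pass constraint.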
Problem

Research questions and friction points this paper is trying to address.

General Continual Learning
Rehearsal-Free
Pretrained Models
Online Datastreams
Task Boundaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta Post-Refinement
General Continual Learning
Pretrained Models
Meta-Learning
Rehearsal-Free
๐Ÿ”Ž Similar Papers
No similar papers found.