Enhancing Pretrained Model-based Continual Representation Learning via Guided Random Projection

📅 2026-03-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limitations of randomly initialized projection layers in pre-trained model–based exemplar-free class-incremental learning, where domain shift restricts representational capacity and high-dimensional expansion often yields ill-conditioned feature matrices, undermining the stability of analytic linear-classifier updates. To overcome these issues, the authors propose SCL-MGSM, which abandons random initialization and instead introduces a MemoryGuard Supervisory Mechanism. This mechanism employs data-driven, progressive random-basis selection to construct a compact, low-dimensional projection layer aligned with the target task. By ensuring numerical stability while enhancing representational adaptability, SCL-MGSM achieves significant performance gains over existing methods across multiple benchmarks, demonstrating a superior balance of expressiveness, stability, and generalization.
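The ill-conditioning the summary points to can be reproduced in a few lines: once the random expansion width exceeds the number of samples, the Gram matrix that the analytic head must invert becomes rank-deficient. The sketch below is illustrative only, not the paper's code; the feature dimension, sample count, ReLU nonlinearity, and widths are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 768                      # samples and PTM feature dim (assumed)
feats = rng.standard_normal((n, d))  # stand-in for frozen pre-trained features

conds = []
for m in (64, 256, 2048):            # increasing random-projection widths
    W = rng.standard_normal((d, m)) / np.sqrt(d)  # random projection layer
    H = np.maximum(feats @ W, 0.0)   # expanded features with ReLU
    G = H.T @ H                      # m x m Gram matrix the analytic head inverts
    conds.append(np.linalg.cond(G))
    print(f"m={m:5d}  cond={conds[-1]:.2e}")
```

At m = 2048 > n = 500 the Gram matrix has rank at most n, so its condition number explodes, which is exactly why the paper argues for a compact projection layer instead of blind widening.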

πŸ“ Abstract
Recent paradigms in Random Projection Layer (RPL)-based continual representation learning have demonstrated superior performance when building upon a pre-trained model (PTM). These methods insert a randomly initialized RPL after a PTM to enhance feature representation in the initial stage. Subsequently, a linear classification head is used for analytic updates in the continual learning stage. However, under severe domain gaps between pre-trained representations and target domains, a randomly initialized RPL exhibits limited expressivity. While greatly scaling up the RPL dimension can improve expressivity, it also induces an ill-conditioned feature matrix, thereby destabilizing the recursive analytic updates of the linear head. To this end, we propose the Stochastic Continual Learner with MemoryGuard Supervisory Mechanism (SCL-MGSM). Unlike random initialization, MGSM constructs the projection layer via a principled, data-guided mechanism that progressively selects target-aligned random bases to adapt the PTM representation to downstream tasks. This facilitates the construction of a compact yet expressive RPL while improving the numerical stability of analytic updates. Extensive experiments on multiple exemplar-free Class Incremental Learning (CIL) benchmarks demonstrate that SCL-MGSM achieves superior performance compared to state-of-the-art methods.
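The analytic-update paradigm the abstract describes, a frozen feature extractor plus a linear head re-solved in closed form from running statistics, can be sketched as follows. This is a minimal ridge-regression variant with a fixed, known class count, not SCL-MGSM itself; the dimensions, regularizer, and synthetic task stream are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
m, C, lam = 128, 10, 1e-3           # RPL dim, total classes, ridge term (assumed)

A = lam * np.eye(m)                 # running Gram: lam*I + sum_t H_t^T H_t
B = np.zeros((m, C))                # running cross term: sum_t H_t^T Y_t

def learn_task(H, Y):
    """Absorb one task's projected features H and one-hot labels Y into the
    sufficient statistics, then re-solve the linear head in closed form.
    No past exemplars are stored, only the m x m and m x C statistics."""
    global A, B
    A += H.T @ H
    B += H.T @ Y
    return np.linalg.solve(A, B)    # ridge head over all tasks seen so far

tasks = []                          # synthetic two-task stream, 5 classes each
for t in range(2):
    H = rng.standard_normal((100, m))                 # stand-in for RPL outputs
    Y = np.eye(C)[rng.integers(5 * t, 5 * (t + 1), 100)]
    tasks.append((H, Y))
    W = learn_task(H, Y)
```

The key property of this style of update is that the head after any number of tasks equals the head trained jointly on all data seen so far, which is why conditioning of the accumulated Gram matrix (the `A` above) matters so much.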
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Random Projection
Pretrained Model
Domain Shift
Numerical Stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Guided Random Projection
Continual Representation Learning
Pretrained Model Adaptation
MemoryGuard Supervisory Mechanism
Analytic Update Stability
Ruilin Li
Wuhan University
Heming Zou
Tsinghua University
Machine Learning
Xiufeng Yan
China University of Mining and Technology
Zheming Liang
University of Science and Technology of China
Jie Yang
Wuhan University
Chenliang Li
School of Cyber Science and Engineering, Wuhan University
Information Retrieval · Data Mining · Natural Language Processing · Social Media
Xue Yang
Shanghai Jiao Tong University