Global Pre-fixing, Local Adjusting: A Simple yet Effective Contrastive Strategy for Continual Learning

📅 2025-09-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Catastrophic forgetting in continual learning stems primarily from inter-task and intra-task feature confusion. To address this, we propose a global-local collaborative contrastive learning framework: (1) globally, we introduce equiangular tight frames (ETFs) on the hypersphere to partition non-overlapping feature regions for distinct tasks, achieving task-level decoupling; (2) locally, we design an adjustable structure to optimize intra-class compactness and enhance intra-task discriminability. Our method employs a two-stage contrastive loss—global pre-fixed and local adaptive—requiring no modification to the backbone network and enabling plug-and-play deployment. Evaluated on multiple standard continual learning benchmarks, it significantly outperforms existing contrastive learning approaches, effectively mitigating feature confusion while jointly optimizing cross-task separability and intra-task compactness. The framework demonstrates strong transferability and generalization capability.
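The global step above pre-fixes one anchor direction per task as a simplex equiangular tight frame (ETF) on the unit hypersphere, so task regions cannot overlap. As a rough illustration only (this is not the authors' code; `simplex_etf` is a hypothetical helper), a simplex ETF of K unit vectors with pairwise cosine similarity exactly -1/(K-1) can be constructed in numpy:

```python
import numpy as np

def simplex_etf(num_tasks: int, dim: int, seed: int = 0) -> np.ndarray:
    """Build a simplex ETF: `num_tasks` unit vectors in `dim` dimensions
    (requires dim >= num_tasks - 1) with pairwise cosine -1/(num_tasks-1),
    i.e. maximally and equally separated directions on the hypersphere."""
    assert dim >= num_tasks - 1
    rng = np.random.default_rng(seed)
    # Random orthonormal basis U (dim x num_tasks) via reduced QR decomposition.
    U, _ = np.linalg.qr(rng.standard_normal((dim, num_tasks)))
    # Center the identity to obtain the simplex structure, then rescale
    # so every column has unit norm.
    M = U @ (np.eye(num_tasks) - np.ones((num_tasks, num_tasks)) / num_tasks)
    M *= np.sqrt(num_tasks / (num_tasks - 1))
    return M.T  # rows are the fixed task anchors

anchors = simplex_etf(num_tasks=5, dim=128)
cos = anchors @ anchors.T  # diagonal 1, off-diagonal -1/4
```

Features of task t would then be pulled toward `anchors[t]` while the local, adjustable per-class structure is optimized inside that task's region.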

📝 Abstract
Continual learning (CL) involves acquiring and accumulating knowledge from evolving tasks while alleviating catastrophic forgetting. Recently, leveraging contrastive loss to construct more transferable and less forgetful representations has been a promising direction in CL. Despite advancements, the performance of such methods is still limited due to confusion arising from both inter-task and intra-task features. To address the problem, we propose a simple yet effective contrastive strategy named Global Pre-fixing, Local Adjusting for Supervised Contrastive learning (GPLASC). Specifically, to avoid task-level confusion, we divide the entire unit hypersphere of representations into non-overlapping regions, with the centers of the regions forming an inter-task pre-fixed Equiangular Tight Frame (ETF). Meanwhile, for individual tasks, our method helps regulate the feature structure and form intra-task adjustable ETFs within their respective allocated regions. As a result, our method simultaneously ensures discriminative feature structures both between tasks and within tasks and can be seamlessly integrated into any existing contrastive continual learning framework. Extensive experiments validate its effectiveness.
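The abstract builds on the supervised contrastive (SupCon) objective, which GPLASC extends with the pre-fixed and adjustable ETF structure. As a minimal numpy sketch of the standard SupCon loss only (illustrative, assuming L2-normalized features; `supcon_loss` is a hypothetical name, and the paper's method adds its ETF terms on top of this):

```python
import numpy as np

def supcon_loss(features: np.ndarray, labels: np.ndarray,
                temperature: float = 0.1) -> float:
    """Supervised contrastive loss over L2-normalized feature rows.
    Each anchor is pulled toward same-label samples (positives) and
    pushed away from all other samples in the batch."""
    n = features.shape[0]
    sim = features @ features.T / temperature
    # Exclude each anchor's self-similarity from the denominator.
    self_mask = np.eye(n, dtype=bool)
    sim_exp = np.exp(sim)
    sim_exp[self_mask] = 0.0
    log_prob = sim - np.log(sim_exp.sum(axis=1, keepdims=True))
    # Positives: same label, not the anchor itself.
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    # Average the negative log-probability over each anchor's positives.
    per_anchor = -(log_prob * pos_mask).sum(axis=1) / np.maximum(pos_mask.sum(axis=1), 1)
    return float(per_anchor.mean())
```

Well-clustered features yield a lower loss than features whose classes are mixed, which is the property the global ETF anchors and local adjusting both exploit.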
Problem

Research questions and friction points this paper is trying to address.

Avoid task-level confusion in continual learning
Regulate feature structure within individual tasks
Ensure discriminative features between and within tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Global Pre-fixing with Equiangular Tight Frame
Local Adjusting for intra-task feature regulation
Seamless integration into contrastive learning frameworks
Jia Tang
Nanjing University of Aeronautics and Astronautics
Xinrui Wang
Nanjing University of Aeronautics and Astronautics
Songcan Chen
Nanjing University of Aeronautics and Astronautics
Machine Learning · Pattern Recognition