🤖 AI Summary
Catastrophic forgetting in continual learning stems primarily from inter-task and intra-task feature confusion. To address this, we propose a global-local collaborative contrastive learning framework: (1) globally, we introduce equiangular tight frames (ETFs) on the hypersphere to partition non-overlapping feature regions for distinct tasks, achieving task-level decoupling; (2) locally, we design an adjustable structure to optimize intra-class compactness and enhance intra-task discriminability. Our method employs a two-stage contrastive loss—global pre-fixed and local adaptive—requiring no modification to the backbone network and enabling plug-and-play deployment. Evaluated on multiple standard continual learning benchmarks, it significantly outperforms existing contrastive learning approaches, effectively mitigating feature confusion while jointly optimizing cross-task separability and intra-task compactness. The framework demonstrates strong transferability and generalization capability.
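The "equiangular tight frame" the summary relies on is, in the simplex case, a set of K unit vectors with the maximal identical pairwise angle (cosine −1/(K−1)), giving each task a maximally separated anchor direction. A minimal NumPy sketch of the standard construction M = √(K/(K−1)) · U(I_K − (1/K)11ᵀ), with U any orthonormal basis; the function name and dimensions here are illustrative, not from the paper:

```python
import numpy as np

def simplex_etf(k, d, seed=0):
    """K unit vectors in R^d (d >= k) with identical pairwise cosine
    -1/(K-1): a simplex equiangular tight frame (illustrative helper)."""
    assert d >= k
    rng = np.random.default_rng(seed)
    # Orthonormal d x k basis U via QR of a random Gaussian matrix.
    u, _ = np.linalg.qr(rng.standard_normal((d, k)))
    # M = sqrt(K/(K-1)) * U (I - (1/K) 1 1^T); columns come out unit-norm.
    return np.sqrt(k / (k - 1)) * u @ (np.eye(k) - np.ones((k, k)) / k)

etf = simplex_etf(5, 16)   # e.g. one pre-fixed anchor per task
gram = etf.T @ etf         # 1 on the diagonal, -1/(K-1) = -0.25 off it
```

Because the Gram matrix is fixed in advance, such anchors can serve as the "pre-fixed" task-level targets the summary describes, with no dependence on the data seen so far.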
📝 Abstract
Continual learning (CL) involves acquiring and accumulating knowledge from evolving tasks while alleviating catastrophic forgetting. Recently, leveraging contrastive loss to construct more transferable and less forgetful representations has been a promising direction in CL. Despite these advancements, performance remains limited by confusion arising from both inter-task and intra-task features. To address this problem, we propose a simple yet effective contrastive strategy named **G**lobal **P**re-fixing, **L**ocal **A**djusting for **S**upervised **C**ontrastive learning (GPLASC). Specifically, to avoid task-level confusion, we divide the entire unit hypersphere of representations into non-overlapping regions, with the centers of the regions forming an inter-task pre-fixed **E**quiangular **T**ight **F**rame (ETF). Meanwhile, for individual tasks, our method helps regulate the feature structure and form intra-task adjustable ETFs within their respective allocated regions. As a result, our method *simultaneously* ensures discriminative feature structures both between tasks and within tasks, and can be seamlessly integrated into any existing contrastive continual learning framework. Extensive experiments validate its effectiveness.
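The two-part objective described above can be sketched, under heavy simplification, as a standard supervised contrastive term (a stand-in for the "local adjusting" part) plus a term pulling each normalized feature toward its task's pre-fixed anchor (a stand-in for the "global pre-fixing" constraint). The function names, the toy batch, and the choice of a plain cosine pull are illustrative assumptions, not the paper's exact losses:

```python
import numpy as np

rng = np.random.default_rng(0)

def supcon_loss(feats, labels, tau=0.1):
    """Standard supervised contrastive loss (assumes every label occurs
    at least twice in the batch)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau
    np.fill_diagonal(sim, -np.inf)                       # drop self-pairs
    log_den = np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    return -np.mean(np.where(pos, sim - log_den, 0.0).sum(1) / pos.sum(1))

def global_prefix_term(feats, task_ids, anchors):
    """Hypothetical global term: cosine pull of each normalized feature
    toward the pre-fixed anchor of its task (illustrative only)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return 1.0 - np.mean(np.sum(f * anchors[task_ids], axis=1))

# Toy batch: 2 tasks x 2 classes x 2 samples, 16-d features.
feats = rng.standard_normal((8, 16))
labels = np.repeat(np.arange(4), 2)
task_ids = np.repeat(np.arange(2), 4)
anchors = np.linalg.qr(rng.standard_normal((16, 2)))[0].T  # 2 orthonormal anchors
loss = supcon_loss(feats, labels) + global_prefix_term(feats, task_ids, anchors)
```

Because both terms act only on the output features, a combination of this kind can wrap an existing contrastive continual learner without touching the backbone, matching the plug-and-play claim in the abstract.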