DRL: Discriminative Representation Learning with Parallel Adapters for Class Incremental Learning

📅 2025-10-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Addressing three key challenges in replay-free continual learning, namely escalating model complexity, non-smooth representation shift, and misalignment between stage-wise optimization and global inference, this paper proposes the Discriminative Representation Learning (DRL) framework. Methodologically, DRL introduces: (1) a lightweight parallel adapter network for parameter-efficient fine-tuning; (2) a transfer gate mechanism that ensures smooth cross-stage representation inheritance; and (3) Decoupled Anchor Supervision, which leverages virtual positive/negative anchors to jointly constrain the feature space and unify the discriminative structure across incremental stages. Evaluated on six standard benchmarks, DRL consistently surpasses state-of-the-art methods, achieving higher incremental accuracy and more consistent generalization while maintaining high training and inference efficiency throughout the learning process.

📝 Abstract
With the excellent representation capabilities of Pre-Trained Models (PTMs), remarkable progress has been made in non-rehearsal Class-Incremental Learning (CIL) research. However, it remains an extremely challenging task due to three conundrums: increasingly large model complexity, non-smooth representation shift during incremental learning, and inconsistency between stage-wise sub-problem optimization and global inference. In this work, we propose the Discriminative Representation Learning (DRL) framework to specifically address these challenges. To conduct incremental learning effectively yet efficiently, DRL's network, called the Incremental Parallel Adapter (IPA) network, is built upon a PTM and incrementally augments the model by learning a lightweight adapter, with a small parameter-learning overhead, in each incremental stage. The adapter is responsible for adapting the model to new classes; it inherits and propagates the representation capability of the current model through a parallel connection between them governed by a transfer gate. As a result, this design guarantees a smooth representation shift between different incremental stages. Furthermore, to alleviate the inconsistency and enable comparable feature representations across incremental stages, we design Decoupled Anchor Supervision (DAS). It decouples the constraints on positive and negative samples by comparing each with a virtual anchor. This decoupling promotes discriminative representation learning and aligns the feature spaces learned at different stages, thereby narrowing the gap between stage-wise local optimization over a subset of data and global inference across all classes. Extensive experiments on six benchmarks reveal that DRL consistently outperforms other state-of-the-art methods throughout the entire CIL period while maintaining high efficiency in both training and inference.
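To make the IPA design concrete, below is a minimal PyTorch sketch of a bottleneck adapter running in parallel with the current model's output and blended through a learnable transfer gate. The class name, bottleneck width, and sigmoid gating form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ParallelAdapter(nn.Module):
    """Sketch: bottleneck adapter attached in parallel to a frozen PTM block.

    A learnable scalar transfer gate blends the previous-stage representation
    with the new adapter branch, keeping the representation shift between
    incremental stages smooth. Hypothetical design, not the paper's code.
    """
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)    # down-projection
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)      # up-projection
        self.gate = nn.Parameter(torch.zeros(1))  # gate logit; sigmoid(0) = 0.5

    def forward(self, x: torch.Tensor, prev: torch.Tensor) -> torch.Tensor:
        # x:    input features to the block
        # prev: output of the current (frozen) model for this block
        branch = self.up(self.act(self.down(x)))  # lightweight adapter branch
        g = torch.sigmoid(self.gate)              # transfer gate in (0, 1)
        return prev + g * branch                  # inherit prev, add gated new signal
```

Under this reading, each incremental stage would train only the new adapter and its gate while the PTM and earlier adapters stay frozen, which is how the per-stage parameter overhead stays small.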
Problem

Research questions and friction points this paper is trying to address.

Addresses escalating model complexity in class-incremental learning
Smooths representation shift across incremental stages
Reduces inconsistency between stage-wise optimization and global inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel adapters enable lightweight incremental learning
Transfer gate ensures smooth representation shift
Decoupled anchor supervision aligns feature spaces across stages (see the sketch below)
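
A minimal sketch of the decoupled anchor idea, assuming L2-normalized features and one learnable virtual anchor per class; the function name, margin value, and exact loss form are illustrative assumptions rather than the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

def decoupled_anchor_loss(feats: torch.Tensor,
                          labels: torch.Tensor,
                          anchors: torch.Tensor,
                          margin: float = 0.5) -> torch.Tensor:
    """Sketch of decoupled anchor supervision (hypothetical formulation).

    feats:   (B, D) L2-normalized features
    labels:  (B,)   ground-truth class indices
    anchors: (C, D) L2-normalized learnable virtual class anchors

    The positive term pulls each feature toward its own anchor; the
    negative term pushes it below a similarity margin to every other
    anchor. The two constraints are applied separately rather than
    coupled inside one softmax.
    """
    sims = feats @ anchors.t()                         # (B, C) cosine similarities
    pos = sims.gather(1, labels.unsqueeze(1))          # similarity to own anchor
    pos_loss = (1.0 - pos).mean()                      # pull positives toward anchor
    neg_mask = torch.ones_like(sims, dtype=torch.bool)
    neg_mask.scatter_(1, labels.unsqueeze(1), False)   # drop own-class entries
    neg_loss = F.relu(sims[neg_mask] - margin).mean()  # push negatives below margin
    return pos_loss + neg_loss
```

Because the positive and negative terms are decoupled, features from different stages are measured against the same anchor geometry, which is what narrows the gap between stage-wise local optimization and global inference across all classes.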
Authors

Jiawei Zhan
Peking University
Jun Liu
Tencent Youtu Lab, Shenzhen, China
Jinlong Peng
Tencent Youtu Lab (Computer Vision, Deep Learning)
Xiaochen Chen
Tencent Youtu Lab, Shenzhen, China
Bin-Bin Gao
Senior Researcher, Tencent YouTu (Computer Vision, Machine Learning, Artificial Intelligence)
Yong Liu
Tencent Youtu Lab, Shenzhen, China
Chengjie Wang
Tencent Youtu Lab, Shenzhen, China