Parameter-Efficient Augment Plugin for Class-Incremental Learning

📅 2025-12-03
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the dual challenges of the stability-plasticity dilemma and parameter explosion in class-incremental learning (CIL), this paper proposes DLC, a plug-and-play low-rank adaptation paradigm. DLC integrates LoRA as a lightweight, task-specific residual module injected directly into the deep layers of the base model to enhance task-adaptive representations. A lightweight gating unit dynamically aggregates the outputs of the task-specific LoRA adapters, effectively mitigating cross-task interference. Crucially, DLC requires no pretraining and is fully compatible with standard CIL strategies such as replay and knowledge distillation. On ImageNet-100, DLC introduces only 4% additional parameters over ResNet-18 yet achieves an 8% absolute accuracy gain; under a fixed memory budget, it significantly outperforms existing state-of-the-art methods. The core contribution is the plug-and-play, modular application of LoRA to CIL, achieving high accuracy, minimal parameter overhead, and strong scalability across tasks.
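The task-specific residual described above follows the standard LoRA formulation: a frozen base weight is augmented by a low-rank update, so the adapted output is h = Wx + (α/r)·B(Ax). A minimal sketch in plain Python, assuming illustrative shapes and a scaling constant `alpha` (this is the generic LoRA mechanism, not the paper's exact code):

```python
# Hedged sketch of a LoRA-style residual update (illustrative, not the
# paper's implementation). A frozen base weight W is augmented with a
# low-rank residual B @ A, giving h = W x + (alpha / r) * B (A x).

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0):
    """Base output plus the low-rank task-specific residual."""
    r = len(A)                      # rank = number of rows of A
    base = matvec(W, x)             # frozen base-model path
    low = matvec(A, x)              # project input down to rank r
    residual = matvec(B, low)       # project back up to output dim
    scale = alpha / r
    return [b + scale * res for b, res in zip(base, residual)]

# With B initialized to zeros, the adapter is an exact identity at the
# start of a new task: the output equals the frozen base output.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]                    # rank-1 down-projection
B = [[0.0], [0.0]]                  # zero-init up-projection
print(lora_forward(W, A, B, [2.0, 4.0]))  # → [2.0, 4.0]
```

Because only A and B are trained per task, the parameter cost per plugin stays tiny relative to the base network, which is what keeps the reported overhead at a few percent of ResNet-18.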

๐Ÿ“ Abstract
Existing class-incremental learning (CIL) approaches based on replay or knowledge distillation are often constrained by forgetting or the stability-plasticity dilemma. Expansion-based approaches can achieve higher accuracy, but they typically require substantial parameter growth. In this paper, we propose a plugin extension paradigm termed the Deployment of extra LoRA Components (DLC) for non-pre-trained CIL scenarios. We treat the feature extractor trained through replay or distillation as a base model with rich knowledge. For each task, we use Low-Rank Adaptation (LoRA) to inject task-specific residuals into the base model's deep layers. During inference, representations carrying these task-specific residuals are aggregated to produce classification predictions. To mitigate interference from non-target LoRA plugins, we introduce a lightweight weighting unit that learns to assign importance scores to the different LoRA-tuned representations. Like downloadable content in software, our method serves as a plug-and-play enhancement that efficiently extends base methods. Remarkably, on the large-scale ImageNet-100, with merely 4% of the parameters of a standard ResNet-18, our DLC model achieves a significant 8% improvement in accuracy, demonstrating exceptional efficiency. Moreover, it surpasses state-of-the-art methods under a fixed memory budget.
Problem

Research questions and friction points this paper is trying to address.

Addresses forgetting and stability-plasticity dilemma in class-incremental learning
Reduces parameter growth in expansion-based incremental learning methods
Enhances base models with plug-and-play task-specific adaptations efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoRA components inject task-specific residuals for CIL
Lightweight weighting unit mitigates interference from non-target plugins
Plug-and-play enhancement improves accuracy with minimal parameter increase
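The weighting unit named in the innovations above can be sketched as a softmax gate over per-task importance scores: each LoRA plugin produces an adapted representation, and the gate forms a convex combination of them before classification. A minimal stdlib sketch, assuming the scores come from some learned scorer (the function names and shapes here are illustrative assumptions, not the paper's API):

```python
import math

# Hedged sketch of the lightweight weighting unit (illustrative, not the
# paper's implementation): raw importance scores for the T task plugins
# are softmax-normalized, and the T LoRA-adapted representations are
# aggregated as a weighted sum, suppressing non-target plugins.

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(reps, scores):
    """Weighted sum of per-task representations.
    reps:   list of T feature vectors (one per LoRA plugin).
    scores: list of T raw importance logits from the weighting unit."""
    weights = softmax(scores)
    dim = len(reps[0])
    return [sum(w * rep[d] for w, rep in zip(weights, reps))
            for d in range(dim)]

# Two task plugins; the second receives a much higher importance score,
# so the aggregated feature is dominated by it.
reps = [[1.0, 0.0], [0.0, 1.0]]
out = aggregate(reps, [0.0, 5.0])   # out is dominated by reps[1]
```

Gating by learned importance rather than hard task selection lets every plugin contribute at inference time while keeping cross-task interference low when one plugin is clearly the right match.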
Zhiming Xu
University of Virginia
LLM Inference, Machine Learning Systems

Baile Xu
National Key Laboratory for Novel Software Technology, Nanjing University, China

Jian Zhao
Department of Computer Science and Technology, Nanjing University, China

Furao Shen
Department of Computer Science & Technology, Nanjing University
Neural Networks, Robotic Intelligence

Suorong Yang
Nanjing University
Computer Vision, Deep Learning, Multimodal Learning