Adaptive Weighted Parameter Fusion with CLIP for Class-Incremental Learning

📅 2025-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In class-incremental learning (CIL), deep models suffer from catastrophic forgetting when continuously acquiring new classes. To address this, we propose an adaptive weighted parameter fusion framework that—uniquely—integrates CLIP’s vision-language priors into incremental parameter fusion. Our method employs a learnable weight generation network to dynamically balance distribution alignment and inter-class separability in the parameter space, while enforcing inter-task distribution consistency to mitigate data shift across increments. This enables discriminative fusion of old and new task knowledge without compromising backward transfer. Evaluated on multiple standard CIL benchmarks, our approach achieves average accuracy gains of 3.2–5.7% over state-of-the-art methods, significantly alleviating forgetting while preserving or even improving recognition performance on previously learned classes.

Technology Category

Application Category

📝 Abstract
Class-incremental Learning (CIL) enables the model to incrementally absorb knowledge from new classes and build a generic classifier across all previously encountered classes. When the model optimizes with new classes, the knowledge of previous classes is inevitably erased, leading to catastrophic forgetting. Addressing this challenge requires making a trade-off between retaining old knowledge and accommodating new information. However, this balancing process often requires sacrificing some information, which can lead to a partial loss in the model's ability to discriminate between classes. To tackle this issue, we design the adaptive weighted parameter fusion with Contrastive Language-Image Pre-training (CLIP), which not only takes into account the variability of the data distribution of different tasks, but also retains all the effective information of the parameter matrix to the greatest extent. In addition, we introduce a balance factor that can balance the data distribution alignment and distinguishability of adjacent tasks. Experimental results on several traditional benchmarks validate the superiority of the proposed method.
Problem

Research questions and friction points this paper is trying to address.

Balancing old knowledge retention with new class learning
Preventing catastrophic forgetting in incremental learning
Enhancing class discrimination while adapting to new data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive weighted parameter fusion with CLIP
Balance factor for data distribution alignment
Retains all effective parameter matrix information
🔎 Similar Papers
No similar papers found.
Juncen Guo
Juncen Guo
Fudan University
Incremental LearningContinual Learning
Xiaoguang Zhu
Xiaoguang Zhu
Postdoc Researcher, University of California, Davis
AI for HealthComputer VisionImage RetrievalVideo Analysis
L
Liangyu Teng
Fudan University
H
Hao Yang
Fudan University
J
Jing Liu
The University of British Columbia
Y
Yang Liu
Soochow University
L
Liang Song
Fudan University