🤖 AI Summary
In class-incremental learning (CIL), deep models suffer from catastrophic forgetting when continuously acquiring new classes. To address this, we propose an adaptive weighted parameter fusion framework that—uniquely—integrates CLIP’s vision-language priors into incremental parameter fusion. Our method employs a learnable weight generation network to dynamically balance distribution alignment and inter-class separability in the parameter space, while enforcing inter-task distribution consistency to mitigate data shift across increments. This enables discriminative fusion of old and new task knowledge without compromising backward transfer. Evaluated on multiple standard CIL benchmarks, our approach achieves average accuracy gains of 3.2–5.7% over state-of-the-art methods, significantly alleviating forgetting while preserving or even improving recognition performance on previously learned classes.
📝 Abstract
Class-incremental learning (CIL) enables a model to absorb knowledge from new classes incrementally and to build a unified classifier over all classes encountered so far. When the model is optimized on new classes, knowledge of previous classes is inevitably overwritten, leading to catastrophic forgetting. Addressing this challenge requires a trade-off between retaining old knowledge and accommodating new information. However, this balancing process typically sacrifices some information, which can partially erode the model's ability to discriminate between classes. To tackle this issue, we design an adaptive weighted parameter fusion method with Contrastive Language-Image Pre-training (CLIP), which not only accounts for the variability of the data distributions across tasks, but also retains, to the greatest possible extent, all of the effective information in the parameter matrix. In addition, we introduce a balance factor that trades off data-distribution alignment against the distinguishability of adjacent tasks. Experimental results on several standard benchmarks validate the superiority of the proposed method.
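The core idea of weighted parameter fusion with a balance factor can be illustrated with a minimal sketch. The function below is hypothetical (the paper's actual weights come from a learnable weight generation network, not this heuristic): it fuses old- and new-task parameter matrices element-wise, using a `balance` factor to trade alignment with the old distribution against new-task discriminability.

```python
import numpy as np

def adaptive_fuse(w_old, w_new, balance=0.5):
    """Illustrative sketch (not the paper's method): fuse old- and new-task
    parameter matrices with per-element adaptive weights.

    `balance` in [0, 1] trades distribution alignment (favoring parameters
    that stayed close to the old task) against distinguishability
    (favoring parameters that shifted toward the new task).
    """
    # Heuristic importance: parameters that changed most between tasks are
    # assumed to carry new-task-specific information.
    delta = np.abs(w_new - w_old)
    importance = delta / (delta.max() + 1e-8)  # normalize to [0, 1]
    # Adaptive mixing weight for the new-task parameters, modulated by balance.
    alpha = balance * importance + (1.0 - balance) * (1.0 - importance)
    # Convex combination keeps both parameter sets' information in the fusion.
    return alpha * w_new + (1.0 - alpha) * w_old
```

With `balance=0.5` the mixing weight reduces to a plain average; raising `balance` shifts the fused parameters toward the entries that moved most during new-task training.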