Compensating Distribution Drifts in Class-incremental Learning of Pre-trained Vision Transformers

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address feature distribution drift across tasks in class-incremental learning (CIL) with pre-trained Vision Transformers (ViTs), caused by sequential fine-tuning, this paper proposes Sequential Learning with Drift Compensation (SLDC). SLDC introduces learnable latent-space transition operators, in both linear and weakly nonlinear variants, to explicitly align feature distributions across tasks, and integrates knowledge distillation to further suppress representation drift. This work is the first to systematically incorporate distribution alignment into CIL frameworks for pre-trained ViTs. On multiple standard benchmarks, SLDC significantly outperforms existing sequential fine-tuning (SeqFT) methods; when combined with distillation, its performance approaches that of joint training, achieving near-offline accuracy. The core innovation lies in a lightweight, interpretable drift compensation mechanism that effectively balances flexibility and generalization.

📝 Abstract
Recent advances have shown that sequential fine-tuning (SeqFT) of pre-trained vision transformers (ViTs), followed by classifier refinement using approximate distributions of class features, can be an effective strategy for class-incremental learning (CIL). However, this approach is susceptible to distribution drift, caused by the sequential optimization of shared backbone parameters. This results in a mismatch between the feature distributions of previously learned classes and those produced by the updated model, ultimately degrading classifier performance over time. To address this issue, we introduce a latent space transition operator and propose Sequential Learning with Drift Compensation (SLDC). SLDC aims to align feature distributions across tasks to mitigate the impact of drift. First, we present a linear variant of SLDC, which learns a linear operator by solving a regularized least-squares problem that maps features before and after fine-tuning. Next, we extend this with a weakly nonlinear SLDC variant, which assumes that the ideal transition operator lies between purely linear and fully nonlinear transformations. This is implemented using learnable, weakly nonlinear mappings that balance flexibility and generalization. To further reduce representation drift, we apply knowledge distillation (KD) in both algorithmic variants. Extensive experiments on standard CIL benchmarks demonstrate that SLDC significantly improves the performance of SeqFT. Notably, by combining KD to address representation drift with SLDC to compensate distribution drift, SeqFT achieves performance comparable to joint training across all evaluated datasets. Code: https://github.com/raoxuan98-hash/sldc.git.
Problem

Research questions and friction points this paper is trying to address.

Addressing distribution drift in class-incremental learning of vision transformers
Aligning feature distributions across tasks to mitigate sequential optimization effects
Compensating drift through linear and weakly nonlinear transition operators with distillation
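The distillation component mentioned above can be sketched as a feature-level penalty that discourages the backbone from moving representations of previous data. The squared-L2 form below is an illustrative choice; the paper's exact KD objective may differ.

```python
import numpy as np

# Hedged sketch of feature-level knowledge distillation for suppressing
# representation drift: penalize the distance between features from the
# current model and from a frozen copy saved before the new task.
def kd_feature_loss(feat_new: np.ndarray, feat_frozen: np.ndarray) -> float:
    """Mean squared distance between current and frozen (teacher) features."""
    return float(np.mean((feat_new - feat_frozen) ** 2))
```

During training on a new task, this term would be added to the classification loss with a weighting coefficient, trading plasticity on new classes against stability of old representations.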
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent space transition operator for drift compensation
Linear and weakly nonlinear mapping variants
Knowledge distillation combined with distribution alignment
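The weakly nonlinear variant listed above can be sketched as a linear map plus a small, bounded nonlinear residual, trained by gradient descent on a regression loss between pre- and post-update features. The tanh residual, the strength cap alpha, and all hyperparameters here are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

# Sketch of a weakly nonlinear transition operator: f(x) = x W + alpha * tanh(x V),
# sitting between a purely linear map (alpha = 0) and a fully nonlinear one.
rng = np.random.default_rng(1)
d, n = 32, 256
X = rng.standard_normal((n, d))                            # pre-update features
Y = X @ (np.eye(d) + 0.05 * rng.standard_normal((d, d)))   # drifted targets

W = np.eye(d)                                  # linear part, init at identity
V = 0.01 * rng.standard_normal((d, d))         # weak nonlinear part
alpha = 0.1                                    # caps nonlinearity strength
lr = 0.05

mse_init = np.mean((X @ W + alpha * np.tanh(X @ V) - Y) ** 2)
for _ in range(300):
    H = np.tanh(X @ V)
    err = X @ W + alpha * H - Y                # (n, d) residual
    grad_W = X.T @ err / n                     # gradient of 0.5 * MSE w.r.t. W
    grad_V = X.T @ (err * alpha * (1 - H**2)) / n  # chain rule through tanh
    W -= lr * grad_W
    V -= lr * grad_V
mse_final = np.mean((X @ W + alpha * np.tanh(X @ V) - Y) ** 2)
```

Keeping alpha small is one way to realize the "between linear and fully nonlinear" assumption: the operator gains flexibility beyond a pure linear map while the bounded residual limits overfitting to the current task's features.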