Singular Value Fine-tuning for Few-Shot Class-Incremental Learning

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Few-Shot Class-Incremental Learning (FSCIL) faces two core challenges: severe overfitting caused by the extremely limited samples per novel class, and catastrophic forgetting of previously learned classes. To address both, the paper proposes **Singular Value Fine-tuning for FSCIL (SVFCL)**, a parameter-efficient fine-tuning approach that freezes the left and right singular vectors of the pretrained weight matrices and learns only task-specific singular values, which are then merged across tasks. This design reduces trainable parameters to less than 0.1%, striking a better balance between mitigating forgetting and suppressing overfitting. Evaluated on four standard FSCIL benchmarks, SVFCL consistently outperforms mainstream PEFT methods, including LoRA and prompt tuning, in accuracy, forgetting resistance, and generalization. Notably, it is among the first works to systematically examine the under-explored overfitting problem when adapting large foundation models to FSCIL, making it a strong baseline for parameter-efficient continual learning under extreme data scarcity.

📝 Abstract
Class-Incremental Learning (CIL) aims to prevent catastrophic forgetting of previously learned classes while sequentially incorporating new ones. The more challenging Few-shot CIL (FSCIL) setting further complicates this by providing only a limited number of samples for each new class, increasing the risk of overfitting in addition to standard CIL challenges. While catastrophic forgetting has been extensively studied, overfitting in FSCIL, especially with large foundation models, has received less attention. To fill this gap, we propose Singular Value Fine-tuning for FSCIL (SVFCL) and compare it with existing approaches for adapting foundation models to FSCIL, which primarily build on Parameter-Efficient Fine-Tuning (PEFT) methods such as prompt tuning and Low-Rank Adaptation (LoRA). Specifically, SVFCL applies singular value decomposition to the foundation model weights, keeping the singular vectors fixed while fine-tuning the singular values for each task, and then merging them. This simple yet effective approach not only alleviates forgetting but also mitigates overfitting more effectively, while significantly reducing trainable parameters. Extensive experiments on four benchmark datasets, along with visualizations and ablation studies, validate the effectiveness of SVFCL. The code will be made available.
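
To make the mechanism concrete, the snippet below is a minimal PyTorch sketch of a singular-value fine-tuning layer. The class name `SVFLinear`, the additive per-task offsets, and the sum-based merging rule are illustrative assumptions; the abstract specifies only that singular vectors are frozen, singular values are tuned per task, and the results are merged.

```python
import torch
import torch.nn as nn

class SVFLinear(nn.Module):
    """Minimal sketch of singular-value fine-tuning for one linear layer.

    W = U @ diag(s) @ Vh is taken from the pretrained weight; U and Vh are
    frozen, and each incremental task learns only an offset on s.
    (Names and the additive merging rule are illustrative assumptions.)
    """

    def __init__(self, pretrained_weight: torch.Tensor):
        super().__init__()
        U, s, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)
        self.register_buffer("U", U)      # frozen left singular vectors
        self.register_buffer("Vh", Vh)    # frozen right singular vectors
        self.register_buffer("s", s)      # frozen base singular values
        self.deltas = nn.ParameterList()  # one trainable offset per task

    def add_task(self) -> None:
        # Freeze earlier tasks' offsets; only the newest one is trained.
        for d in self.deltas:
            d.requires_grad_(False)
        self.deltas.append(nn.Parameter(torch.zeros_like(self.s)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Merge singular values across tasks by summing the learned offsets.
        s = self.s + sum(self.deltas) if len(self.deltas) > 0 else self.s
        weight = self.U @ torch.diag(s) @ self.Vh
        return x @ weight.T
```

In use, one would call `add_task()` at the start of each few-shot session and pass only `layer.deltas[-1]` to the optimizer, so that at most `min(d_out, d_in)` parameters are updated per task.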
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in class-incremental learning
Mitigates overfitting in few-shot class-incremental learning
Reduces trainable parameters in foundation model adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Singular Value Fine-tuning for Few-shot CIL
Fixed singular vectors, fine-tuned singular values
Reduces trainable parameters and mitigates overfitting (see the parameter-count sketch below)
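
The back-of-envelope sketch below compares trainable-parameter counts for a single weight matrix under full fine-tuning, LoRA, and singular-value tuning. The dimensions and LoRA rank are generic illustrative values, not figures reported in the paper.

```python
# Trainable parameters for one d_out x d_in weight matrix
# (illustrative sizes, not numbers from the paper).
d_out, d_in, lora_rank = 768, 768, 8

full_ft = d_out * d_in             # full fine-tuning: every entry of W
lora = lora_rank * (d_out + d_in)  # LoRA: low-rank factors A and B
svf = min(d_out, d_in)             # singular-value tuning: diag(s) only

print(full_ft, lora, svf)  # 589824 12288 768
```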