Unlocking Prototype Potential: An Efficient Tuning Framework for Few-Shot Class-Incremental Learning

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of static prototypes in few-shot class-incremental learning: they are prone to representation bias from the backbone network, which in turn hinders performance. To overcome this, the authors propose a novel paradigm that freezes the pre-trained feature extractor and instead fine-tunes learnable prototypes. They introduce a dual calibration mechanism—comprising class-specific and task-aware offset adjustments—that enables prototypes to dynamically adapt to new classes within a high-quality, fixed feature space. Remarkably, this approach requires only a minimal number of learnable parameters yet achieves substantial performance gains over existing methods across multiple benchmarks, significantly enhancing both discriminative capability and incremental learning efficacy.
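The dual-calibrated prototype described above can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the names (`class_offsets`, `task_offsets`), the additive composition, and the cosine-similarity classifier are all ours.

```python
import numpy as np

# Hedged sketch of a dual-calibrated prototype: a frozen class centroid
# plus a learnable class-specific offset and a learnable task-aware offset.
# All names and the additive form are illustrative assumptions.

rng = np.random.default_rng(0)
dim, n_classes, n_tasks = 8, 3, 2
task_of_class = np.array([0, 0, 1])  # classes 0-1 from task 0, class 2 from task 1

centroids = rng.normal(size=(n_classes, dim))   # frozen class means from the backbone
class_offsets = np.zeros((n_classes, dim))      # learnable, class-specific calibration
task_offsets = np.zeros((n_tasks, dim))         # learnable, task-aware calibration

def prototypes():
    """Calibrated prototype = frozen centroid + class offset + task offset."""
    return centroids + class_offsets + task_offsets[task_of_class]

def classify(x):
    """Nearest-prototype decision by cosine similarity."""
    p = prototypes()
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    return int(np.argmax(p @ (x / np.linalg.norm(x))))
```

With both offsets at zero this reduces to plain nearest-centroid matching; the offsets give the fixed feature space adjustable decision regions without touching the backbone.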

📝 Abstract
Few-shot class-incremental learning (FSCIL) seeks to continuously learn new classes from very limited samples while preserving previously acquired knowledge. Traditional methods often utilize a frozen pre-trained feature extractor to generate static class prototypes, which suffer from the inherent representation bias of the backbone. Recent prompt-based tuning methods attempt to adapt the backbone via minimal parameter updates, but under extreme data scarcity the model's capacity to assimilate novel information and substantively enhance its global discriminative power remains inherently limited. In this paper, we propose a novel shift in perspective: freezing the feature extractor while fine-tuning the prototypes. We argue that the primary challenge in FSCIL is not feature acquisition, but rather the optimization of decision regions within a static, high-quality feature space. To this end, we introduce an efficient prototype fine-tuning framework that evolves static centroids into dynamic, learnable components. The framework employs a dual-calibration method consisting of class-specific and task-aware offsets, which function synergistically to improve the discriminative capacity of prototypes across successive incremental classes. Extensive experiments demonstrate that our method attains superior performance across multiple benchmarks while requiring minimal learnable parameters.
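The "minimal learnable parameters" claim can be made concrete with a toy tuning loop. Everything below is an assumed sketch, not the paper's procedure: the cross-entropy loss, learning rate, and plain gradient updates are our choices; the point is only that gradients flow into the two offset tables while the backbone centroids stay frozen.

```python
import numpy as np

# Assumed minimal tuning sketch: frozen centroids, learnable class-specific
# and task-aware offsets, updated by hand-derived softmax cross-entropy
# gradients on a small few-shot episode.

rng = np.random.default_rng(1)
dim, n_classes, n_tasks = 8, 3, 2
task_of_class = np.array([0, 0, 1])

centroids = rng.normal(size=(n_classes, dim))   # frozen class means
class_off = np.zeros((n_classes, dim))          # learnable
task_off = np.zeros((n_tasks, dim))             # learnable

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Few-shot episode: five noisy feature vectors per class.
xs = np.concatenate([centroids + 0.3 * rng.normal(size=(n_classes, dim))
                     for _ in range(5)])
ys = np.tile(np.arange(n_classes), 5)

lr = 0.1
for _ in range(50):
    for x, y in zip(xs, ys):
        proto = centroids + class_off + task_off[task_of_class]
        # Gradient of cross-entropy w.r.t. the prototypes themselves.
        grad = np.outer(softmax(proto @ x) - np.eye(n_classes)[y], x)
        class_off -= lr * grad                      # per-class calibration
        for t in range(n_tasks):                    # shared per-task calibration
            task_off[t] -= lr * grad[task_of_class == t].sum(axis=0)

print(class_off.size + task_off.size, "learnable parameters")  # -> 40 learnable parameters
```

Only 40 scalars are trained here, versus the millions in a backbone: the decision regions move, the feature space does not.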
Problem

Research questions and friction points this paper is trying to address.

Few-Shot Class-Incremental Learning
Prototype Optimization
Representation Bias
Static Feature Space
Incremental Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

prototype fine-tuning
few-shot class-incremental learning
dual-calibration
static feature space
learnable prototypes