🤖 AI Summary
Problem: Fully fine-tuning large-scale vision-language models (VLMs) for few-shot adaptation (FSA) is impractical: their scale makes training computationally prohibitive, and the handful of samples per class is insufficient to fit so many parameters.
Method: We first uncover a two-phase learning dynamic of PEFT in FSA: training naturally splits into task-level feature extraction followed by specialization to the available classes. To accommodate this dynamic, we propose a two-stage parameter-efficient fine-tuning (PEFT) framework, 2SFS: Stage 1 learns a task-specific feature extractor via PEFT, decoupling generic representations from task-specific knowledge; Stage 2 freezes this extractor and trains only a lightweight linear classifier on top. The scheme enables a novel category-level selective inference mechanism: at test time, only novel classes are embedded by the adapted text encoder, while base-class embeddings are already available within the classifier.
Results: Evaluated across 11 datasets, 3 backbone architectures, and 2 FSA settings with fixed hyperparameters, our method matches or surpasses state-of-the-art performance and demonstrates significantly improved cross-scenario robustness.
📝 Abstract
An old-school recipe for training a classifier is to (i) learn a good feature extractor and (ii) optimize a linear layer atop. When only a handful of samples are available per category, as in Few-Shot Adaptation (FSA), data are insufficient to fit a large number of parameters, rendering the above impractical. This is especially true with large pre-trained Vision-Language Models (VLMs), which has motivated successful research at the intersection of Parameter-Efficient Fine-tuning (PEFT) and FSA. In this work, we start by analyzing the learning dynamics of PEFT techniques when trained on few-shot data from only a subset of categories, referred to as the "base" classes. We show that these dynamics naturally split into two distinct phases: (i) task-level feature extraction and (ii) specialization to the available concepts. To accommodate this dynamic, we depart from prompt- and adapter-based methods and tackle FSA differently. Specifically, given a fixed computational budget, we split it between (i) learning a task-specific feature extractor via PEFT and (ii) training a linear classifier on top. We call this scheme Two-Stage Few-Shot Adaptation (2SFS). Unlike established methods, our scheme enables a novel form of selective inference at the category level: at test time, only novel categories are embedded by the adapted text encoder, while embeddings of base categories are already available within the classifier. With fixed hyperparameters across two settings, three backbones, and eleven datasets, 2SFS matches or surpasses the state of the art, while established methods degrade significantly across settings.
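The two-stage scheme and the category-level selective inference can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the random linear "encoders", the nearest-mean classifier, and all names here are toy stand-ins for the VLM and the PEFT-trained components; the point is only the control flow — Stage 1 adapts a feature extractor, Stage 2 freezes it and builds a linear classifier, and at test time only novel classes require a text-encoder call, since base-class weights already live in the classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy embedding dimension

# Toy stand-in for a frozen pretrained image encoder.
W_img = rng.normal(size=(d, d))

def image_encoder(x, adapter=None):
    """Embed an input vector; optionally apply a residual PEFT adapter."""
    z = W_img @ x
    if adapter is not None:
        z = z + adapter @ x  # residual adaptation (LoRA-style, schematically)
    return z / np.linalg.norm(z)

def text_encoder(name):
    """Toy 'text' embedding for a class name (random unit vector)."""
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

# --- Stage 1: learn a task-specific feature extractor via PEFT ---
# (here just a fixed random residual; real training would fit it on the
# few-shot base-class data within the first part of the compute budget)
adapter = 0.1 * rng.normal(size=(d, d))

# --- Stage 2: freeze the adapter, train a linear classifier on top ---
# (sketched as class means of adapted embeddings of the few shots)
base_classes = ["cat", "dog"]
shots = {c: [rng.normal(size=d) for _ in range(4)] for c in base_classes}
classifier = {c: np.mean([image_encoder(x, adapter) for x in xs], axis=0)
              for c, xs in shots.items()}

# --- Selective inference at the category level ---
# Base classes: reuse the weights stored in the classifier (no encoder call).
# Novel classes: embed with the adapted text encoder, only at test time.
weights = dict(classifier)
for c in ["fox"]:  # novel classes
    weights[c] = text_encoder(c)

def predict(x):
    z = image_encoder(x, adapter)
    return max(weights, key=lambda c: float(weights[c] @ z))

print(predict(rng.normal(size=d)))  # one of: cat, dog, fox
```

The design point the sketch captures is that the per-class weight matrix is heterogeneous: base-class rows come from the Stage-2 classifier, so only the novel-class rows incur a text-encoder forward pass at test time.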