🤖 AI Summary
To address the challenge of adapting contrastive pre-trained vision-language models (e.g., CLIP) to few-shot classification, this paper proposes a lightweight and efficient fine-tuning method that updates only the final projection matrix of the visual encoder. Its key contributions are: (i) the first demonstration that optimizing solely this low-dimensional projection layer, without modifying the text encoder or introducing auxiliary modules, outperforms mainstream adaptation strategies; and (ii) an L2-distance regularization between the pre-trained and fine-tuned projection matrices, which significantly enhances generalization and robustness. The method drastically reduces trainable parameters and computational overhead. It achieves state-of-the-art performance across 11 standard few-shot benchmarks and superior results on challenging settings including cross-domain transfer, base-to-novel class generalization, and test-time adaptation.
📝 Abstract
We consider the problem of adapting a contrastively pretrained vision-language model such as CLIP (Radford et al., 2021) for few-shot classification. The literature addresses this problem by learning a linear classifier on the frozen visual features, optimizing word embeddings, or learning external feature adapters. This paper introduces an alternative way to adapt CLIP without adding 'external' parameters to optimize. We find that simply fine-tuning the last projection matrix of the vision encoder yields better performance than all baselines. Furthermore, we show that regularizing training with the distance between the fine-tuned and pretrained matrices makes CLIP adaptation more reliable. This simple approach, coined ProLIP, yields state-of-the-art performance on 11 few-shot classification benchmarks, as well as on few-shot domain generalization, cross-dataset transfer, base-to-new class generalization, and test-time adaptation. Code will be made available at: https://github.com/astra-vision/ProLIP.
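To make the idea concrete, here is a minimal sketch of the objective described above: only the projection matrix `W` is trained, with a softmax cross-entropy loss over cosine similarities to frozen text embeddings plus an L2 penalty `||W - W0||^2` toward the pretrained matrix `W0`. All dimensions, the regularization weight, the learning rate, and the use of finite-difference gradients are illustrative assumptions for this sketch, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: these are illustrative assumptions, not CLIP's real dimensions.
d_feat, d_embed, n_class, n_shot = 8, 4, 3, 6

W0 = rng.normal(size=(d_feat, d_embed))         # pretrained projection (frozen reference)
feats = rng.normal(size=(n_shot, d_feat))       # frozen visual features (pre-projection)
text_emb = rng.normal(size=(n_class, d_embed))  # frozen class text embeddings
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)
labels = rng.integers(0, n_class, size=n_shot)

lam = 0.1  # assumed strength of the L2 pull toward the pretrained matrix

def loss(W):
    z = feats @ W
    z /= np.linalg.norm(z, axis=1, keepdims=True)      # unit-norm image embeddings
    logits = z @ text_emb.T                            # cosine-similarity logits
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                  # softmax probabilities
    ce = -np.log(p[np.arange(n_shot), labels]).mean()  # cross-entropy
    return ce + lam * np.sum((W - W0) ** 2)            # + ||W - W0||^2 penalty

def num_grad(f, W, eps=1e-5):
    """Central finite differences; stands in for autodiff in this sketch."""
    g = np.zeros_like(W)
    for idx in np.ndindex(*W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[idx] += eps
        Wm[idx] -= eps
        g[idx] = (f(Wp) - f(Wm)) / (2 * eps)
    return g

# Gradient descent on the projection matrix only; everything else stays frozen.
W = W0.copy()
initial = loss(W)
for _ in range(30):
    W -= 0.05 * num_grad(loss, W)
final = loss(W)
```

The regularizer keeps the adapted projection close to its pretrained initialization, which is what the abstract credits for the method's reliability under distribution shift.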