Fine-Tuning CLIP's Last Visual Projector: A Few-Shot Cornucopia

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the adaptation challenge of contrastive pre-trained vision-language models (e.g., CLIP) for few-shot classification, this paper proposes a lightweight and efficient fine-tuning method that updates only the final projection matrix of the visual encoder. Our key contributions are: (i) the first demonstration that optimizing solely this low-dimensional projection layer—without modifying the text encoder or introducing auxiliary modules—outperforms mainstream adaptation strategies; and (ii) the introduction of an L2-distance regularization between the pre-trained and fine-tuned projection matrices, which significantly enhances generalization and robustness. The method drastically reduces trainable parameters and computational overhead. It achieves state-of-the-art performance across 11 standard few-shot benchmarks and demonstrates superior results on challenging tasks including cross-domain transfer, base-to-novel class generalization, and test-time adaptation.
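The recipe in the summary is simple enough to sketch end to end: freeze everything, take gradient steps only on the visual projection matrix `W`, and add an L2 penalty pulling `W` back toward its pretrained value `W0`. The toy numpy sketch below illustrates that objective on synthetic data; all shapes, values, and hyperparameters (`d_feat`, `lam`, `tau`, etc.) are invented for illustration, feature normalization is omitted for brevity, and the real method of course operates on CLIP's actual encoders.

```python
import numpy as np

# Toy illustration (not the authors' code): fine-tune only the visual
# projection matrix W, regularized toward its pretrained value W0 by
# lam * ||W - W0||_F^2. Everything else (features, text embeddings) is frozen.
rng = np.random.default_rng(0)
d_feat, d_embed, n_cls, n_shot = 32, 16, 5, 4        # assumed toy dimensions

feats = rng.normal(size=(n_cls * n_shot, d_feat))    # frozen pre-projection features
labels = np.repeat(np.arange(n_cls), n_shot)         # few-shot labels
text_emb = rng.normal(size=(n_cls, d_embed))         # frozen class text embeddings

W0 = rng.normal(size=(d_feat, d_embed)) / np.sqrt(d_feat)  # "pretrained" projection
W = W0.copy()
lam, lr, tau = 0.1, 0.02, 1.0    # L2 weight, step size, softmax temperature (assumed)

def ce_loss(W):
    """Cross-entropy of cosine-free (unnormalized) image-text logits."""
    logits = feats @ W @ text_emb.T / tau
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(labels)), labels]).mean(), p

loss_before, _ = ce_loss(W)
for _ in range(200):
    _, p = ce_loss(W)
    g = p.copy()
    g[np.arange(len(labels)), labels] -= 1.0         # d(CE)/d(logits)
    g /= len(labels)
    # Gradient w.r.t. W only, plus the pull toward the pretrained matrix.
    grad_W = feats.T @ g @ text_emb / tau + 2.0 * lam * (W - W0)
    W -= lr * grad_W

loss_after, _ = ce_loss(W)
```

The regularizer is the only addition beyond plain fine-tuning: it bounds how far `W` drifts from `W0`, which is what the summary credits for the method's generalization and robustness.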

📝 Abstract
We consider the problem of adapting a contrastively pretrained vision-language model like CLIP (Radford et al., 2021) for few-shot classification. The literature addresses this problem by learning a linear classifier of the frozen visual features, optimizing word embeddings, or learning external feature adapters. This paper introduces an alternative way for CLIP adaptation without adding 'external' parameters to optimize. We find that simply fine-tuning the last projection matrix of the vision encoder leads to performance better than all baselines. Furthermore, we show that regularizing training with the distance between the fine-tuned and pretrained matrices adds reliability for adapting CLIP. This simple approach, coined ProLIP, yields state-of-the-art performance on 11 few-shot classification benchmarks, few-shot domain generalization, cross-dataset transfer, base-to-new class generalization, and test-time adaptation. Code will be made available at: https://github.com/astra-vision/ProLIP.
Problem

Research questions and friction points this paper is trying to address.

Adapting CLIP for few-shot classification without external parameters
Fine-tuning the vision encoder's final projection matrix improves performance
ProLIP achieves state-of-the-art results on few-shot benchmarks and domain generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes CLIP's visual embedding projection matrix
Regularizes training with matrix distance for stability
Achieves state-of-the-art few-shot classification performance