🤖 AI Summary
This work addresses the performance degradation commonly observed when fine-tuning open-weight CLIP models with self-supervised methods, which often stems from shifts in optimization statistics and interference from false negative samples. To tackle this issue, the authors propose TuneCLIP, a framework that introduces a theoretically motivated warm-up phase to recover stable optimization statistics and an improved contrastive loss to mitigate the adverse effects of false negatives. This approach provides the first systematic solution to performance collapse when fine-tuning such models, without retraining from scratch. TuneCLIP significantly enhances cross-task generalization, achieving gains of up to 2.5% on ImageNet and out-of-distribution benchmarks and 1.2% on DataComp, thereby establishing a new, efficient post-pretraining adaptation baseline for prominent open models such as SigLIP.
📝 Abstract
CLIP has become a cornerstone of multimodal representation learning, yet improving its performance typically requires a prohibitively costly process of training from scratch on billions of samples. We ask a different question: can we improve the performance of open-weight CLIP models across various downstream tasks using only existing self-supervised datasets? Unlike supervised fine-tuning, which adapts a pretrained model to a single downstream task, our setting seeks to improve general performance across a range of tasks. However, as both our experiments and prior studies reveal, simply applying standard training protocols starting from an open-weight CLIP model often fails, leading to performance degradation. In this paper, we introduce TuneCLIP, a self-supervised fine-tuning framework that overcomes this performance degradation. TuneCLIP has two key components: (1) a warm-up stage that recovers optimization statistics to reduce cold-start bias, motivated by theoretical analysis, and (2) a fine-tuning stage that optimizes a new contrastive loss to mitigate the penalization of false negative pairs. Our extensive experiments show that TuneCLIP consistently improves performance across model architectures and scales. Notably, it elevates leading open-weight models such as SigLIP (ViT-B/16), achieving gains of up to +2.5% on ImageNet and related out-of-distribution benchmarks, and +1.2% on the highly competitive DataComp benchmark, setting a new strong baseline for efficient post-pretraining adaptation.
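The abstract does not spell out how the warm-up stage recovers optimization statistics, but one plausible reading is that adaptive-optimizer moment estimates (e.g., Adam's first and second moments) are rebuilt before any parameter actually moves, so the first real update does not suffer from zero-initialized statistics. A minimal sketch of that idea on a toy quadratic objective, with a hypothetical `adam_step` helper (not the paper's procedure):

```python
import numpy as np

def adam_step(params, grad, state, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; with lr=0 it only refreshes the moment statistics."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad**2
    m_hat = state["m"] / (1 - beta1 ** state["t"])   # bias-corrected 1st moment
    v_hat = state["v"] / (1 - beta2 ** state["t"])   # bias-corrected 2nd moment
    return params - lr * m_hat / (np.sqrt(v_hat) + eps)

# Toy objective: minimize ||params||^2, so the gradient is 2 * params.
params = np.array([1.0, -2.0, 3.0])
state = {"m": np.zeros_like(params), "v": np.zeros_like(params), "t": 0}

# Warm-up: lr = 0 keeps the parameters fixed while m/v are populated,
# avoiding the cold-start bias of freshly zeroed optimizer statistics.
for _ in range(50):
    params = adam_step(params, 2 * params, state, lr=0.0)

# Fine-tuning proper: the very first update already uses mature statistics.
for _ in range(100):
    params = adam_step(params, 2 * params, state, lr=0.05)
```

The same pattern applies to a real CLIP fine-tune: run forward/backward passes over the self-supervised data with the learning rate held at zero, then switch it on once the optimizer state has converged to the model's current gradient scale.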
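Likewise, the "penalization of false negative pairs" refers to image-text pairs that are semantically matched yet land off the diagonal of the contrastive similarity matrix, where the standard InfoNCE denominator pushes them apart. The sketch below illustrates the general idea with a simple similarity-threshold heuristic for excluding suspected false negatives from the denominator; this is an illustrative variant, not the loss proposed in the paper, and `fn_threshold` is an assumed parameter:

```python
import numpy as np

def softmax_xent_rows(sim, mask):
    """Row-wise cross-entropy against the diagonal; masked logits are dropped."""
    logits = np.where(mask, sim, -np.inf)
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()

def contrastive_loss(img, txt, tau=0.07, fn_threshold=None):
    """Symmetric InfoNCE; off-diagonal pairs whose cosine similarity exceeds
    fn_threshold are treated as likely false negatives and excluded."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    cos = img @ txt.T
    sim = cos / tau
    n = sim.shape[0]
    mask = np.ones_like(sim, dtype=bool)
    if fn_threshold is not None:
        mask &= ~((cos > fn_threshold) & ~np.eye(n, dtype=bool))
    # Average the image-to-text and text-to-image directions.
    return 0.5 * (softmax_xent_rows(sim, mask) + softmax_xent_rows(sim.T, mask.T))

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 16))
img[1] = img[0] + 0.05 * rng.normal(size=16)   # near-duplicate image -> false negative
txt = img + 0.1 * rng.normal(size=(8, 16))     # matched captions

loss_plain = contrastive_loss(img, txt)
loss_fnaware = contrastive_loss(img, txt, fn_threshold=0.9)
```

With the near-duplicate pair present, the plain loss penalizes a semantically valid match as a negative, so the false-negative-aware variant yields a strictly lower loss on this batch.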