Fine-tuning Pre-trained Vision-Language Models in a Human-Annotation-Free Manner

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes Collaborative Fine-Tuning (CoFT), a framework for efficiently adapting pre-trained vision-language models to downstream tasks in the absence of human-annotated data. CoFT leverages unlabeled data through an unsupervised fine-tuning strategy based on a dual-model cross-modal collaboration mechanism. It introduces sample-dependent positive and negative textual prompts to model pseudo-label quality, thereby eliminating the need for handcrafted confidence thresholds. A lightweight visual adaptation module and a collaborative pseudo-label filtering mechanism are designed to mitigate confirmation bias and recover informative low-confidence samples. The approach integrates a two-stage training strategy with parameter-efficient fine-tuning. Experimental results demonstrate that CoFT significantly outperforms existing unsupervised methods across multiple downstream tasks and even surpasses several few-shot supervised baselines.
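The summary's central idea — sample-dependent positive and negative textual prompts that score pseudo-label quality without a handcrafted confidence threshold — can be illustrated with a minimal sketch. This is not the paper's exact formulation: the embeddings, the `0.07` temperature, and the sigmoid-of-margin quality score below are illustrative assumptions in the style of CLIP-like models.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def pseudo_label_with_dual_prompts(img_emb, pos_text_embs, neg_text_embs, temp=0.07):
    """Sketch: choose a pseudo-label from positive-prompt similarities and
    derive a per-sample quality score from the positive/negative margin,
    instead of comparing confidence to a fixed global threshold."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    pos_sims = [cos(img_emb, t) for t in pos_text_embs]   # "a photo of a {c}"
    neg_sims = [cos(img_emb, t) for t in neg_text_embs]   # e.g. "not a photo of a {c}"
    probs = softmax([s / temp for s in pos_sims])
    label = max(range(len(probs)), key=probs.__getitem__)
    # Sample-dependent quality: how much the predicted class's positive prompt
    # beats its negative counterpart, squashed through a sigmoid.
    margin = (pos_sims[label] - neg_sims[label]) / temp
    quality = 1.0 / (1.0 + math.exp(-margin))
    return label, probs[label], quality
```

A toy call such as `pseudo_label_with_dual_prompts([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[-1.0, 0.0], [0.0, -1.0]])` yields class 0 with a quality score near 1, since the image embedding aligns with the first positive prompt and opposes its negative prompt.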

📝 Abstract
Large-scale vision-language models (VLMs) such as CLIP exhibit strong zero-shot generalization, but adapting them to downstream tasks typically requires costly labeled data. Existing unsupervised self-training methods rely on pseudo-labeling, yet often suffer from unreliable confidence filtering, confirmation bias, and underutilization of low-confidence samples. We propose Collaborative Fine-Tuning (CoFT), an unsupervised adaptation framework that leverages unlabeled data through a dual-model, cross-modal collaboration mechanism. CoFT introduces a dual-prompt learning strategy with positive and negative textual prompts to explicitly model pseudo-label cleanliness in a sample-dependent manner, removing the need for hand-crafted thresholds or noise assumptions. The negative prompt also regularizes lightweight visual adaptation modules, improving robustness under noisy supervision. CoFT employs a two-phase training scheme, transitioning from parameter-efficient fine-tuning on high-confidence samples to full fine-tuning guided by collaboratively filtered pseudo-labels. Building on CoFT, CoFT+ further enhances adaptation via iterative fine-tuning, momentum contrastive learning, and LLM-generated prompts. Extensive experiments demonstrate consistent gains over existing unsupervised methods and even few-shot supervised baselines.
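The abstract's dual-model collaboration can also be sketched: two models cross-check each other's pseudo-labels, and agreement is used to recover samples that a confidence threshold alone would discard. The agreement rule below is a hypothetical simplification of the paper's collaborative filtering mechanism, shown only to make the idea concrete.

```python
def collaborative_filter(labels_a, labels_b, conf_a, conf_b):
    """Sketch: keep a pseudo-label when two independently adapted models
    agree on it. Agreement recovers informative low-confidence samples
    that per-model confidence thresholding would throw away."""
    kept = []
    for i, (la, lb, ca, cb) in enumerate(zip(labels_a, labels_b, conf_a, conf_b)):
        if la == lb:
            # Cross-model agreement serves as the filter; the retained
            # confidence is the stronger of the two models' scores.
            kept.append((i, la, max(ca, cb)))
    return kept
```

For example, with `labels_a = [0, 1, 2]`, `labels_b = [0, 2, 2]` and any confidences, samples 0 and 2 survive (the models agree) while sample 1 is dropped, even though sample 2 might fall below a naive confidence cutoff.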
Problem

Research questions and friction points this paper is trying to address.

vision-language models
unsupervised adaptation
pseudo-labeling
human-annotation-free
fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

unsupervised adaptation
vision-language models
pseudo-label filtering
dual-prompt learning
parameter-efficient fine-tuning
Qian-Wei Wang
Tsinghua University
machine learning
Guanghao Meng
Tsinghua Shenzhen International Graduate School, Tsinghua University, Institute of Perceptual Intelligence, Peng Cheng Laboratory
Ren Cai
Peking University Shenzhen Graduate School, Peking University, Institute of Perceptual Intelligence, Peng Cheng Laboratory
Yaguang Song
Peng Cheng Laboratory
Deep Learning, Multi-Modal Pre-training
Shu-Tao Xia
SIGS, Tsinghua University
coding and information theory, machine learning, computer vision, AI security