Adapting Vision-Language Models Without Labels: A Comprehensive Survey

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) often generalize poorly to downstream tasks when labeled data is scarce, and existing surveys lack a systematic taxonomy of unsupervised adaptation methods. To address this gap, we propose the first unified classification framework for unsupervised VLM adaptation, categorizing approaches into four paradigms by the availability of unlabeled visual data: data-free transfer, unsupervised domain transfer, episodic test-time adaptation, and online test-time adaptation. Within this framework, we conduct a structured analysis of core techniques—including knowledge distillation, unsupervised domain adaptation, test-time adaptation, and continual learning—while integrating modality-alignment optimization strategies. We further construct a comprehensive methodology map covering major benchmarks and application scenarios, and release an open-source literature repository. This survey fills a critical void in systematic research on unsupervised VLM adaptation, providing both theoretical foundations and practical guidance for efficient, low-resource VLM deployment.

📝 Abstract
Vision-Language Models (VLMs) have demonstrated remarkable generalization capabilities across a wide range of tasks. However, their performance often remains suboptimal when directly applied to specific downstream scenarios without task-specific adaptation. To enhance their utility while preserving data efficiency, recent research has increasingly focused on unsupervised adaptation methods that do not rely on labeled data. Despite the growing interest in this area, there remains a lack of a unified, task-oriented survey dedicated to unsupervised VLM adaptation. To bridge this gap, we present a comprehensive and structured overview of the field. We propose a taxonomy based on the availability and nature of unlabeled visual data, categorizing existing approaches into four key paradigms: Data-Free Transfer (no data), Unsupervised Domain Transfer (abundant data), Episodic Test-Time Adaptation (batch data), and Online Test-Time Adaptation (streaming data). Within this framework, we analyze core methodologies and adaptation strategies associated with each paradigm, aiming to establish a systematic understanding of the field. Additionally, we review representative benchmarks across diverse applications and highlight open challenges and promising directions for future research. An actively maintained repository of relevant literature is available at https://github.com/tim-learn/Awesome-LabelFree-VLMs.
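The taxonomy above is organized along a single axis: what unlabeled target data is available at adaptation time. As a minimal illustration (a hypothetical helper, not code from the survey), the four regimes map onto the four paradigms like so:

```python
from typing import Optional


def adaptation_paradigm(unlabeled_data: Optional[str]) -> str:
    """Map the available unlabeled target data to the survey's paradigm.

    `unlabeled_data` is one of:
      None        -> no target data at all
      "abundant"  -> a full unlabeled target dataset
      "batch"     -> one test batch at a time
      "stream"    -> a continual stream of test samples
    """
    mapping = {
        None: "Data-Free Transfer",
        "abundant": "Unsupervised Domain Transfer",
        "batch": "Episodic Test-Time Adaptation",
        "stream": "Online Test-Time Adaptation",
    }
    if unlabeled_data not in mapping:
        raise ValueError(f"unknown data regime: {unlabeled_data!r}")
    return mapping[unlabeled_data]


# e.g. adaptation_paradigm("stream") -> "Online Test-Time Adaptation"
```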
Problem

Research questions and friction points this paper is trying to address.

Survey unsupervised adaptation methods for Vision-Language Models
Classify approaches into four paradigms by unlabeled-data availability
Analyze methodologies and challenges in label-free VLM adaptation
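One methodology family the survey analyzes is test-time adaptation, commonly instantiated as entropy minimization over the model's predictions on unlabeled test batches (as in TENT). The sketch below is illustrative only, not the survey's method: it adapts a hypothetical per-class logit bias by one gradient step on the mean prediction entropy, using the closed-form gradient dH/dz = -p * (log p + H) for softmax outputs p.

```python
import numpy as np


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def entropy(p):
    # Shannon entropy per sample, in nats.
    return -(p * np.log(p + 1e-12)).sum(axis=-1)


def tta_step(logits, bias, lr=0.1):
    """One entropy-minimization step on a per-class bias (TENT-style sketch).

    `logits` has shape (batch, classes); `bias` has shape (classes,).
    For z = logits + bias, dH/dz = -p * (log p + H) per sample; we descend
    on the batch-mean entropy and return the updated bias.
    """
    p = softmax(logits + bias)
    h = entropy(p)                                   # (batch,)
    grad = -p * (np.log(p + 1e-12) + h[:, None])     # dH/dz, (batch, classes)
    return bias - lr * grad.mean(axis=0)


# Toy demo: entropy of the predictions drops as the bias adapts.
logits = np.array([[1.0, 0.2, -0.3],
                   [0.5, 0.1, 0.0]])
bias = np.zeros(3)
before = entropy(softmax(logits + bias)).mean()
for _ in range(10):
    bias = tta_step(logits, bias)
after = entropy(softmax(logits + bias)).mean()
```

In the VLM setting the logits would come from image-text similarity scores, and adapted parameters are typically prompts or normalization statistics rather than a raw bias; the bias here just keeps the sketch self-contained.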
Innovation

Methods, ideas, or system contributions that make the work stand out.

First unified, task-oriented survey of unsupervised VLM adaptation
Taxonomy based on the availability and nature of unlabeled visual data
Four key paradigms: data-free transfer, unsupervised domain transfer, episodic and online test-time adaptation