Generalizing Vision-Language Models to Novel Domains: A Comprehensive Survey

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) exhibit limited cross-domain generalization, necessitating systematic transfer strategies. Method: This work presents a comprehensive survey of VLM generalization to novel domains, proposing a modular taxonomy that unifies prompt-based, parameter-based, and feature-based transfer paradigms. It clarifies the evolutionary relationship between VLMs and multimodal large language models (MLLMs), and compares the reviewed methods empirically across mainstream benchmarks, analyzing how large-scale pretraining and multimodal alignment mechanisms affect downstream performance. Contribution/Results: The study establishes the first technical roadmap for VLM generalization research, constructing a structured knowledge framework tailored to downstream tasks. It provides both a theoretical foundation and practical guidelines for multimodal transfer learning, advancing systematic methodology in this rapidly evolving field.

📝 Abstract
Recently, vision-language pretraining has emerged as a transformative technique that integrates the strengths of both visual and textual modalities, resulting in powerful vision-language models (VLMs). Leveraging web-scale pretraining data, these models exhibit strong zero-shot capabilities. However, their performance often deteriorates when confronted with domain-specific or specialized generalization tasks. To address this, a growing body of research focuses on transferring or generalizing the rich knowledge embedded in VLMs to various downstream applications. This survey aims to comprehensively summarize the generalization settings, methodologies, benchmarking, and results in the VLM literature. Delving into typical VLM structures, the current literature is categorized into prompt-based, parameter-based, and feature-based methods according to the transferred modules. The differences and characteristics of each category are further summarized and discussed by revisiting typical transfer learning (TL) settings, providing novel interpretations for TL in the era of VLMs. Popular benchmarks for VLM generalization are then introduced, with thorough performance comparisons among the reviewed methods. Following the advances in large-scale generalizable pretraining, this survey also discusses the relations and differences between VLMs and up-to-date multimodal large language models (MLLMs), e.g., DeepSeek-VL. By systematically reviewing the surging literature in vision-language research from a novel and practical generalization perspective, this survey contributes to a clear landscape of current and future multimodal research.
Problem

Research questions and friction points this paper is trying to address.

Improving VLM performance in domain-specific generalization tasks
Transferring VLM knowledge to diverse downstream applications
Comparing VLM generalization methods and benchmarks systematically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging web-scale pretraining for VLMs
Categorizing methods into prompt, parameter, feature
Comparing VLMs with multimodal large language models
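To make the prompt/parameter/feature taxonomy concrete, below is a minimal sketch of the prompt-based idea (in the spirit of CoOp-style context optimization, one family the survey covers): a shared set of learnable context vectors is prepended to each class-name embedding, while the VLM's encoders stay frozen. The encoders here are mocked with fixed random projections; all names, dimensions, and pooling choices are illustrative assumptions, not details from the survey.

```python
import numpy as np

# Toy sketch of prompt-based transfer: only the context vectors would be
# trained; the "encoders" (random projections here) stand in for a frozen
# CLIP-like VLM. Everything below is illustrative, not the survey's method.

rng = np.random.default_rng(0)
DIM = 8          # toy embedding dimension
N_CLASSES = 3    # number of downstream classes
N_CTX = 4        # number of learnable context tokens

class_token_emb = rng.normal(size=(N_CLASSES, DIM))  # per-class [CLASS] token
text_proj = rng.normal(size=(DIM, DIM))              # frozen text projection

def encode_text(ctx, class_emb):
    # Prepend the shared learnable context to every class token sequence,
    # then mean-pool the tokens and project (a stand-in for a text encoder).
    tokens = np.concatenate(
        [np.tile(ctx, (class_emb.shape[0], 1, 1)),   # (C, N_CTX, D)
         class_emb[:, None, :]],                     # (C, 1, D)
        axis=1,
    )
    pooled = tokens.mean(axis=1) @ text_proj         # (C, D)
    return pooled / np.linalg.norm(pooled, axis=-1, keepdims=True)

# The only trainable parameters in this paradigm:
ctx = rng.normal(scale=0.02, size=(N_CTX, DIM))

# A mock image embedding from the frozen visual encoder.
image_feat = rng.normal(size=(DIM,))
image_feat /= np.linalg.norm(image_feat)

# Zero-shot-style classification: cosine similarity + softmax.
text_feats = encode_text(ctx, class_token_emb)       # (C, D)
logits = 100.0 * (text_feats @ image_feat)           # temperature-scaled sims
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

In a real setting, `ctx` would be optimized by backpropagating a classification loss through the frozen text encoder on a few labeled downstream examples; parameter-based and feature-based methods instead adapt (subsets of) encoder weights or post-hoc feature adapters, respectively.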
Xinyao Li
University of Electronic Science and Technology of China
Jingjing Li
University of Electronic Science and Technology of China, Chengdu 610054, China
Fengling Li
University of Technology Sydney
Cross-modal Analysis · Domain Adaptation · Multimodal Learning
Lei Zhu
Tongji University, Shanghai 200070, China
Yang Yang
University of Electronic Science and Technology of China, Chengdu 610054, China
Heng Tao Shen
University of Electronic Science and Technology of China, Chengdu 610054, China