Hidden in plain sight: VLMs overlook their visual representations

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a modality-fusion imbalance in vision-language models (VLMs) on vision-centric tasks: visual understanding degrades substantially, with VLMs often performing near chance level and significantly underperforming their underlying visual encoders. The primary bottleneck is the language model component, which impedes effective use of visual information. The paper presents a systematic diagnosis of these failures through a multi-dimensional analysis comprising detection of visual representation degradation, evaluation of prompt robustness, and cross-module attribution, together with a benchmark suite designed for rigorous assessment of visual capability. Experiments show performance drops of over 50% relative to direct visual-encoder readouts on tasks such as depth estimation and pixel-level correspondence. The work offers both diagnostic grounding and practical tools for refining VLM architectures and for trustworthy visual evaluation.

📝 Abstract
Language provides a natural interface to specify and evaluate performance on visual tasks. To realize this possibility, vision language models (VLMs) must successfully integrate visual and linguistic information. Our work compares VLMs to a direct readout of their visual encoders to understand their ability to integrate across these modalities. Across a series of vision-centric benchmarks (e.g., depth estimation, correspondence), we find that VLMs perform substantially worse than their visual encoders, dropping to near-chance performance. We investigate these results through a series of analyses across the entire VLM: namely 1) the degradation of vision representations, 2) brittleness to task prompt, and 3) the language model's role in solving the task. We find that the bottleneck in performing these vision-centric tasks lies in this third category; VLMs are not effectively using visual information easily accessible throughout the entire model, and they inherit the language priors present in the LLM. Our work helps diagnose the failure modes of open-source VLMs, and presents a series of evaluations useful for future investigations into visual understanding within VLMs.
Problem

Research questions and friction points this paper is trying to address.

VLMs fail to effectively integrate visual and linguistic information
VLMs perform worse than visual encoders on vision tasks
VLMs inherit language priors instead of using visual data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares VLMs to visual encoders directly
Analyzes vision representation degradation issues
Identifies language model as main bottleneck
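The core comparison above—a direct readout of the visual encoder versus the full VLM on the same vision-centric task—can be sketched in miniature. This is an illustrative toy, not the paper's code: the synthetic task, the linear probe, and the prior-only stand-in for a VLM that ignores its visual input are all assumptions made for the sketch.

```python
# Toy sketch: a linear probe on encoder features ("direct readout") vs. a
# model that answers only from a language prior, ignoring the image.
# All names (encoder_features, probe_accuracy, vlm_accuracy) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary vision task (e.g., "is patch A closer than patch B?").
n, d = 200, 16
labels = rng.integers(0, 2, size=n)

# Encoder features carry the task signal plus noise.
signal = labels[:, None] * 2.0 - 1.0          # map {0,1} -> {-1,+1}
encoder_features = signal + rng.normal(scale=1.0, size=(n, d))

def probe_accuracy(feats, y):
    """Direct readout: least-squares linear probe on encoder features."""
    X = np.hstack([feats, np.ones((len(feats), 1))])   # add bias column
    w, *_ = np.linalg.lstsq(X, y * 2.0 - 1.0, rcond=None)
    pred = (X @ w > 0).astype(int)
    return float((pred == y).mean())

def vlm_accuracy(y):
    """Stand-in for a VLM that answers from a language prior alone."""
    prior_answer = np.zeros_like(y)                    # always the same answer
    return float((prior_answer == y).mean())

acc_probe = probe_accuracy(encoder_features, labels)
acc_vlm = vlm_accuracy(labels)
print(f"encoder probe: {acc_probe:.2f}  prior-only model: {acc_vlm:.2f}")
```

In this toy setup the probe recovers the task from the encoder features while the prior-only model stays near chance, mirroring the gap the paper reports between visual encoders and full VLMs.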