🤖 AI Summary
This work addresses the prevalent issue of textual bias in multimodal large language models, which often over-rely on linguistic priors at the expense of visual input. The study is the first to disentangle this bias into internal corpus bias and external instruction bias. To systematically evaluate visual fidelity, the authors introduce V-FAT, a diagnostic benchmark comprising 4,026 samples, along with a three-level conflict assessment framework (L1–L3) and a Visual Robustness Score (VRS). Experiments across twelve state-of-the-art models reveal that, despite strong performance on standard benchmarks, their visual reasoning capabilities significantly degrade under high linguistic dominance, exposing critical limitations in genuine visual understanding.
📝 Abstract
Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated impressive performance on standard visual reasoning benchmarks. However, there is growing concern that these models rely excessively on linguistic shortcuts rather than genuine visual grounding, a phenomenon we term Text Bias. In this paper, we investigate the fundamental tension between visual perception and linguistic priors. We decouple the sources of this bias into two dimensions: Internal Corpus Bias, stemming from statistical correlations in pretraining, and External Instruction Bias, arising from the alignment-induced tendency toward sycophancy. To quantify this effect, we introduce V-FAT (Visual Fidelity Against Text-bias), a diagnostic benchmark comprising 4,026 VQA instances across six semantic domains. V-FAT employs a Three-Level Evaluation Framework that systematically increases the conflict between visual evidence and textual information: (L1) internal bias from atypical images, (L2) external bias from misleading instructions, and (L3) synergistic bias where both coincide. We introduce the Visual Robustness Score (VRS), a metric designed to penalize "lucky" linguistic guesses and reward true visual fidelity. Our evaluation of 12 frontier MLLMs reveals that while models excel on existing benchmarks, they experience significant visual collapse under high linguistic dominance.
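To make the intuition behind a VRS-style metric concrete, here is a minimal sketch of how a score could reward answers grounded in visual evidence while penalizing answers that merely echo the linguistic prior. This is a hypothetical illustration, not the paper's actual VRS definition: the record fields (`answer`, `visual_gt`, `text_prior`) and the penalty weighting are assumptions for the sake of the example.

```python
def visual_robustness_score(records, penalty=1.0):
    """Illustrative (not the paper's) VRS-style score.

    records: list of dicts with keys
      "answer"     - the model's prediction
      "visual_gt"  - the answer supported by the image
      "text_prior" - the answer a text-only prior would give
    """
    total = 0.0
    for r in records:
        if r["answer"] == r["visual_gt"]:
            total += 1.0        # credit genuine visual grounding
        elif r["answer"] == r["text_prior"]:
            total -= penalty    # penalize a "lucky" linguistic guess
        # other wrong answers contribute 0
    return total / len(records)
```

Under this toy scoring, a model that answers every conflict case from its linguistic prior scores negatively, whereas plain accuracy would only score it as zero.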