VISTA-Bench: Do Vision-Language Models Really Understand Visualized Text as Well as Pure Text?

📅 2026-02-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the unresolved question of whether current vision-language models comprehend visual text in images as effectively as symbolic (plain) text. To investigate this, the authors introduce VISTA-Bench, the first benchmark designed to systematically evaluate model performance on semantically identical text presented in different forms: pixelated (visual) versus symbolic (plain). Through carefully designed multimodal perception and reasoning tasks, controlled text rendering, and cross-modal consistency assessments, the study evaluates over twenty state-of-the-art models. The results reveal a significant performance gap: models consistently underperform on visual text compared to plain text, and the disparity widens as perceptual difficulty increases. This exposes a critical sensitivity of existing models to the modality of text presentation, underscoring limitations in their ability to achieve true cross-modal semantic equivalence.

๐Ÿ“ Abstract
Vision-Language Models (VLMs) have achieved impressive performance in cross-modal understanding across textual and visual inputs, yet existing benchmarks predominantly focus on pure-text queries. In real-world scenarios, language also frequently appears as visualized text embedded in images, raising the question of whether current VLMs handle such input requests comparably. We introduce VISTA-Bench, a systematic benchmark from multimodal perception, reasoning, to unimodal understanding domains. It evaluates visualized text understanding by contrasting pure-text and visualized-text questions under controlled rendering conditions. Extensive evaluation of over 20 representative VLMs reveals a pronounced modality gap: models that perform well on pure-text queries often degrade substantially when equivalent semantic content is presented as visualized text. This gap is further amplified by increased perceptual difficulty, highlighting sensitivity to rendering variations despite unchanged semantics. Overall, VISTA-Bench provides a principled evaluation framework to diagnose this limitation and to guide progress toward more unified language representations across tokenized text and pixels. The source dataset is available at https://github.com/QingAnLiu/VISTA-Bench.
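The abstract's core manipulation, presenting semantically identical content once as tokens and once as pixels, can be illustrated with a small rendering sketch. This is a hedged illustration of the general idea, not the authors' actual pipeline: the Pillow library, the example question, and the output filename are all assumptions of ours.

```python
from PIL import Image, ImageDraw

# Hypothetical example question (not taken from VISTA-Bench itself).
question = "What is the capital of France?"

# Pure-text condition: the string is passed to the model as ordinary tokens.
pure_text_query = question

# Visualized-text condition: render the identical string into an image.
# Canvas size, font, and colors are the kind of "controlled rendering
# conditions" the benchmark varies to probe perceptual difficulty.
img = Image.new("RGB", (640, 80), "white")
draw = ImageDraw.Draw(img)
draw.text((10, 30), question, fill="black")  # uses Pillow's default font
img.save("visualized_question.png")
```

A benchmark built this way can then compare a model's answers to `pure_text_query` against its answers to the rendered image, since any accuracy gap is attributable to the presentation modality rather than the semantics.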
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Visualized Text
Modality Gap
Cross-modal Understanding
Benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language Models
Visualized Text Understanding
Modality Gap
Benchmarking
Cross-modal Representation
Authors
Qing'an Liu (School of Artificial Intelligence, Dalian University of Technology, Dalian, China)
Juntong Feng (School of Artificial Intelligence, Dalian University of Technology, Dalian, China)
Yuhao Wang (Dalian University of Technology; Computer Vision, Multi-modal Fusion, ReID)
Xinzhe Han (University of Chinese Academy of Sciences)
Yujie Cheng (School of Artificial Intelligence, Dalian University of Technology, Dalian, China)
Yue Zhu (IBM Research; Performance Optimization, I/O, Storage, Cloud)
Haiwen Diao (Nanyang Technological University; Computer Vision, Vision-and-Language, Transfer Learning, Multimodal LLM)
Yunzhi Zhuge (Dalian University of Technology; Computer Vision)
Huchuan Lu (School of Artificial Intelligence, Dalian University of Technology, Dalian, China)