Reading, Not Thinking: Understanding and Bridging the Modality Gap When Text Becomes Pixels in Multimodal LLMs

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the significant performance gap—termed the “modality gap”—observed when multimodal large language models process text embedded in images compared to direct textual input. Through a systematic evaluation of seven prominent models across five input modalities using both synthetic and real document images, the work reveals that image-based inputs primarily exacerbate reading errors rather than reasoning errors, and for the first time identifies a phenomenon of chain-of-thought collapse under visual input in certain models. To bridge this gap, the authors propose a novel self-distillation paradigm leveraging pure-text reasoning trajectories. Experiments demonstrate that this approach boosts accuracy on the GSM8K benchmark in image mode from 30.71% to 92.72%, achieves strong generalization on unseen benchmarks, and avoids catastrophic forgetting.

📝 Abstract
Multimodal large language models (MLLMs) can process text presented as images, yet they often perform worse than when the same content is provided as textual tokens. We systematically diagnose this "modality gap" by evaluating seven MLLMs across seven benchmarks in five input modes, spanning both synthetically rendered text and realistic document images from arXiv PDFs to Wikipedia pages. We find that the modality gap is task- and data-dependent. For example, math tasks degrade by over 60 points on synthetic renderings, while natural document images often match or exceed text-mode performance. Rendering choices such as font and resolution are strong confounds, with font alone swinging accuracy by up to 47 percentage points. To understand this, we conduct a grounded-theory error analysis of over 4,000 examples, revealing that image mode selectively amplifies reading errors (calculation and formatting failures) while leaving knowledge and reasoning errors largely unchanged, and that some models exhibit a chain-of-thought reasoning collapse under visual input. Motivated by these findings, we propose a self-distillation method that trains the model on its own pure text reasoning traces paired with image inputs, raising image-mode accuracy on GSM8K from 30.71% to 92.72% and transferring to unseen benchmarks without catastrophic forgetting. Overall, our study provides a systematic understanding of the modality gap and suggests a practical path toward improving visual text understanding in multimodal language models.
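The self-distillation recipe described in the abstract (train on the model's own pure-text reasoning traces, paired with image renderings of the same questions) can be sketched as a data-construction step. This is a minimal illustrative sketch only; the function and field names (`build_self_distillation_set`, `records`, `render_fn`) are assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the self-distillation data pipeline:
#   1) collect the model's own text-mode chain-of-thought traces,
#   2) keep only the ones whose final answer was correct,
#   3) re-render each question as an image and pair it with the trace
#      as a supervised fine-tuning target for image-mode input.

def build_self_distillation_set(records, render_fn):
    """records: dicts with 'question', 'trace', 'answer', 'gold'.
    render_fn: turns a question string into an image (pixels/bytes)."""
    dataset = []
    for r in records:
        # Keep only traces where the text-mode pass was correct, so the
        # image-mode student never imitates faulty reasoning.
        if r["answer"] != r["gold"]:
            continue
        dataset.append({
            "image": render_fn(r["question"]),  # question rendered as pixels
            "target": r["trace"],               # the model's own text-mode CoT
        })
    return dataset
```

In practice `render_fn` would rasterize the question text (the abstract notes that font and resolution choices alone can swing accuracy substantially, so rendering parameters matter), and the resulting pairs would feed a standard supervised fine-tuning loop on the MLLM.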
Problem

Research questions and friction points this paper is trying to address.

modality gap
multimodal LLMs
visual text understanding
reading errors
image-based text
Innovation

Methods, ideas, or system contributions that make the work stand out.

modality gap
multimodal LLMs
self-distillation
visual text understanding
chain-of-thought collapse