Deeper Thought, Weaker Aim: Understanding and Mitigating Perceptual Impairment during Reasoning in Multimodal Large Language Models

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that multimodal large language models often suffer from degraded perception of critical image regions during multi-step reasoning due to visual attention dispersion. For the first time, this study establishes a clear link between such attention dispersion and diminished perceptual performance. To mitigate this issue, the authors propose Visual Region-Guided Attention (VRGA), a training-free mechanism that dynamically reweights attention using an entropy-based focusing criterion to enhance emphasis on question-relevant visual regions. Experimental results demonstrate that VRGA significantly improves both visual grounding accuracy and multi-step reasoning capabilities across multiple vision-language benchmarks, while also enhancing model interpretability through more focused and meaningful attention patterns.

📝 Abstract
Multimodal large language models (MLLMs) often suffer from perceptual impairments under extended reasoning modes, particularly in visual question answering (VQA) tasks. We identify attention dispersion as the underlying cause: during multi-step reasoning, the model's visual attention becomes scattered and drifts away from question-relevant regions, effectively "losing focus" on the visual input. To better understand this phenomenon, we analyze the attention maps of MLLMs and observe that reasoning prompts significantly reduce attention to regions critical for answering the question. We further find a strong correlation between the model's overall attention on image tokens and the spatial dispersiveness of its attention within the image. Leveraging this insight, we propose a training-free Visual Region-Guided Attention (VRGA) framework that selects visual heads based on an entropy-focus criterion and reweights their attention, effectively guiding the model to focus on question-relevant regions during reasoning. Extensive experiments on vision-language benchmarks demonstrate that our method effectively alleviates perceptual degradation, leading to improvements in visual grounding and reasoning accuracy while providing interpretable insights into how MLLMs process visual information.
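The mechanism the abstract describes has two steps: score each attention head's dispersion over image tokens via entropy, keep the focused ("visual") heads, and reweight their attention toward question-relevant regions before renormalizing. A minimal sketch of that idea follows; the function names, the entropy threshold, and the `alpha` amplification factor are illustrative assumptions, not the paper's actual VRGA implementation.

```python
import numpy as np

def head_entropy(attn):
    """Shannon entropy of one head's attention distribution over image tokens.
    attn: (num_image_tokens,) array of non-negative weights."""
    p = attn / (attn.sum() + 1e-12)
    return float(-(p * np.log(p + 1e-12)).sum())

def select_focused_heads(attn_maps, entropy_threshold):
    """Keep heads whose attention is spatially concentrated (low entropy).
    attn_maps: (num_heads, num_image_tokens) array of per-head attention."""
    return [h for h in range(attn_maps.shape[0])
            if head_entropy(attn_maps[h]) < entropy_threshold]

def reweight_attention(attn, region_mask, alpha=2.0):
    """Upweight question-relevant image tokens, then renormalize.
    region_mask: boolean (num_image_tokens,), True on relevant tokens.
    alpha: amplification factor (hypothetical hyperparameter)."""
    boosted = np.where(region_mask, attn * alpha, attn)
    return boosted / boosted.sum()
```

In practice the per-head maps would come from the MLLM's decoder layers (e.g. attention weights restricted to image-token positions), and the relevant-region mask from the question; being training-free, the reweighting is applied at inference time only.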
Problem

Research questions and friction points this paper is trying to address.

perceptual impairment
multimodal large language models
visual attention dispersion
visual question answering
reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

perceptual impairment
attention dispersion
visual grounding
multimodal reasoning
training-free attention modulation
👥 Authors
Ruiying Peng, Tsinghua Shenzhen International Graduate School
Xueyu Wu, The University of Hong Kong (Distributed ML Systems, Federated Learning)
Jing Lei, Carnegie Mellon University (Probability and Statistics)
Lu Hou, Huawei Technologies
Yuanzheng Ma, Test Center, National University of Defense Technology
Xiaohui Li, Huawei Technologies