🤖 AI Summary
This work addresses the problem that multimodal large language models (MLLMs) often suffer degraded perception of critical image regions during multi-step reasoning because their visual attention disperses. For the first time, this study establishes a clear link between such attention dispersion and diminished perceptual performance. To mitigate this issue, the authors propose Visual Region-Guided Attention (VRGA), a training-free mechanism that dynamically reweights attention using an entropy-based focus criterion to strengthen emphasis on question-relevant visual regions. Experimental results demonstrate that VRGA significantly improves both visual grounding accuracy and multi-step reasoning across multiple vision-language benchmarks, while also enhancing interpretability through more focused and meaningful attention patterns.
📝 Abstract
Multimodal large language models (MLLMs) often suffer from perceptual impairments under extended reasoning modes, particularly in visual question answering (VQA) tasks. We identify attention dispersion as the underlying cause: during multi-step reasoning, the model's visual attention becomes scattered and drifts away from question-relevant regions, effectively "losing focus" on the visual input. To better understand this phenomenon, we analyze the attention maps of MLLMs and observe that reasoning prompts significantly reduce attention to the regions critical for answering the question. We further find a strong correlation between the model's overall attention on image tokens and the spatial dispersion of its attention within the image. Leveraging this insight, we propose a training-free Visual Region-Guided Attention (VRGA) framework that selects visual attention heads via an entropy-based focus criterion and reweights their attention, guiding the model to concentrate on question-relevant regions during reasoning. Extensive experiments on vision-language benchmarks demonstrate that our method effectively alleviates perceptual degradation, improving visual grounding and reasoning accuracy while providing interpretable insights into how MLLMs process visual information.
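The abstract does not give the exact form of the entropy criterion or the reweighting rule, but the core idea — score each head by the entropy of its attention over image tokens, treat the most focused heads as "visual heads", and boost their attention on question-relevant region tokens — can be sketched as follows. All names (`attention_entropy`, `reweight_visual_heads`), the `top_k` selection, and the multiplicative boost `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attention_entropy(attn: np.ndarray) -> np.ndarray:
    """Shannon entropy of each head's attention distribution over image tokens.

    attn: (num_heads, num_image_tokens), rows need not be pre-normalized.
    Lower entropy = more spatially focused head.
    """
    p = attn / attn.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def reweight_visual_heads(attn: np.ndarray,
                          region_mask: np.ndarray,
                          alpha: float = 2.0,
                          top_k: int = 4):
    """Illustrative VRGA-style reweighting (assumed form, not the paper's).

    attn:        (num_heads, num_image_tokens) attention over image tokens.
    region_mask: boolean (num_image_tokens,), True on question-relevant tokens.
    Selects the top_k lowest-entropy heads as visual heads, multiplies their
    attention on region tokens by alpha, then renormalizes each row.
    """
    ent = attention_entropy(attn)
    visual_heads = np.argsort(ent)[:top_k]          # most focused heads
    out = attn.copy()
    boost = np.where(region_mask, alpha, 1.0)       # amplify region tokens only
    out[visual_heads] = out[visual_heads] * boost
    out[visual_heads] /= out[visual_heads].sum(axis=-1, keepdims=True)
    return out, visual_heads
```

Being training-free, a mechanism like this could be applied at inference time by hooking the attention weights of selected layers; the renormalization keeps each head a valid distribution, so only the relative emphasis on region tokens changes.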