MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs

📅 2025-02-24
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work identifies a “localization-accuracy dissociation” in multimodal large language models (MLLMs): their self-attention mechanisms accurately localize small objects in images, yet their downstream reasoning frequently overlooks these fine-grained visual details, so performance on small-object visual question answering (VQA) degrades substantially as the visual subject shrinks. To address this, the authors propose training-free, plug-and-play visual intervention methods that enhance fine-grained visual perception by combining the model’s own self-attention maps with gradient-based saliency maps. The approach requires no fine-tuning, additional data, or architectural changes, and applies to any MLLM. Experiments on two widely used MLLMs and seven standard VQA benchmarks show consistent, significant gains in fine-detail VQA accuracy, validating the methods’ generality and effectiveness at zero training cost.
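As a concrete illustration of the fusion step described above, here is a minimal PyTorch sketch. It assumes access to a self-attention map over image patches and a gradient saliency map on the same spatial grid; the function name, normalization scheme, and element-wise fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: fuse the model's own self-attention over image
# patches with a gradient-based saliency map to localize the region
# relevant to the question. All names here are illustrative, not the
# authors' API.
import torch

def fused_relevance_map(attn_map: torch.Tensor,
                        grad_saliency: torch.Tensor) -> torch.Tensor:
    """Combine attention and gradient saliency into one relevance map.

    attn_map:      (H, W) attention from answer tokens to image patches
    grad_saliency: (H, W) |d loss / d patch| magnitudes, same grid
    Both are assumed non-negative.
    """
    # Normalize each map to [0, 1] so neither signal dominates by scale.
    def normalize(m: torch.Tensor) -> torch.Tensor:
        m = m - m.min()
        return m / (m.max() + 1e-8)

    # Element-wise product keeps regions where BOTH signals agree,
    # suppressing spurious attention sinks and noisy gradients.
    return normalize(normalize(attn_map) * normalize(grad_saliency))
```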

📝 Abstract
Multimodal Large Language Models (MLLMs) have experienced rapid progress in visual recognition tasks in recent years. Given their potential integration into many critical applications, it is important to understand the limitations of their visual perception. In this work, we study whether MLLMs can perceive small visual details as effectively as large ones when answering questions about images. We observe that their performance is very sensitive to the size of the visual subject of the question, and further show that this effect is in fact causal by conducting an intervention study. Next, we study the attention patterns of MLLMs when answering visual questions, and intriguingly find that they consistently know where to look, even when they provide the wrong answer. Based on these findings, we then propose training-free visual intervention methods that leverage the internal knowledge of any MLLM itself, in the form of attention and gradient maps, to enhance its perception of small visual details. We evaluate our proposed methods on two widely-used MLLMs and seven visual question answering benchmarks and show that they can significantly improve MLLMs' accuracy without requiring any training. Our results elucidate the risk of applying MLLMs to visual recognition tasks concerning small details and indicate that visual intervention using the model's internal state is a promising direction to mitigate this risk.
Problem

Research questions and friction points this paper is trying to address.

Can MLLMs perceive small visual details as effectively as large ones?
Is the effect of visual subject size on MLLM accuracy causal?
Can MLLM accuracy on small details be improved without any training?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free visual intervention methods
Leverage attention and gradient maps
Enhance perception of small details (see the sketch below)
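Building on a relevance map like the one sketched earlier, one simple training-free intervention is to zoom into the most relevant region and query the model again, so the small subject occupies more of the visual input. The sketch below assumes a hypothetical `mllm.answer(image, question)` inference call and an illustrative center-crop policy; the paper's actual intervention methods may differ in detail.

```python
# Training-free intervention sketch: crop around the peak of the fused
# relevance map and re-ask the same question. `mllm.answer` is a
# hypothetical stand-in for any MLLM's inference call; the crop fraction
# is an illustrative choice.
import torch
from PIL import Image

def zoom_and_reask(image: Image.Image, question: str,
                   relevance: torch.Tensor, mllm,
                   crop_frac: float = 0.5) -> str:
    W, H = image.size
    gh, gw = relevance.shape

    # Locate the patch with the highest fused relevance, in pixel coords.
    idx = int(relevance.argmax())
    cy = (idx // gw + 0.5) / gh * H
    cx = (idx % gw + 0.5) / gw * W

    # Crop a window (crop_frac of each side) centered on that patch,
    # clamped to the image bounds.
    cw, ch = W * crop_frac, H * crop_frac
    left = min(max(cx - cw / 2, 0), W - cw)
    top = min(max(cy - ch / 2, 0), H - ch)
    crop = image.crop((int(left), int(top), int(left + cw), int(top + ch)))

    # Re-run the same question on the zoomed view; no weights are updated.
    return mllm.answer(crop, question)
```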
Jiarui Zhang
University of Southern California, USA
Mahyar Khayatkhoei
University of Southern California, USA
P. Chhikara
Vrije Universiteit Amsterdam, The Netherlands
Filip Ilievski
Vrije Universiteit Amsterdam; Information Sciences Institute (University of Southern California)
commonsense reasoning · neurosymbolic AI · analogy · human-centric AI