🤖 AI Summary
This work addresses the challenge that multimodal large language models often struggle with precise localization and reasoning in fine-grained visual question answering due to low-resolution inputs and attention noise. To overcome this, the authors propose a training-free visual cropping method that first uses an OCR-based diagnostic task to identify attention heads with genuine visual grounding ability. They then construct a cropping guidance map by integrating spatial entropy (measuring how spatially concentrated each head's attention is) with gradient sensitivity (assessing each region's contribution to the prediction), and use it to extract task-relevant sub-images. Evaluated on multiple fine-grained VQA benchmarks, the proposed approach significantly outperforms existing cropping strategies, achieving more accurate region localization and stronger visual grounding without any additional training.
📝 Abstract
Multimodal Large Language Models (MLLMs) show strong performance in Visual Question Answering (VQA) but remain limited in fine-grained reasoning due to low-resolution inputs and noisy attention aggregation. We propose **Head Aware Visual Cropping (HAVC)**, a training-free method that improves visual grounding by leveraging a selectively refined subset of attention heads. HAVC first filters heads through an OCR-based diagnostic task, ensuring that only those with genuine grounding ability are retained. At inference, these heads are further refined using spatial entropy for stronger spatial concentration and gradient sensitivity for predictive contribution. The fused signals produce a reliable Visual Cropping Guidance Map, which highlights the most task-relevant region and guides the cropping of a sub-image that is then provided to the MLLM together with the image-question pair. Extensive experiments on multiple fine-grained VQA benchmarks demonstrate that HAVC consistently outperforms state-of-the-art cropping strategies, achieving more precise localization and stronger visual grounding, and providing a simple yet effective strategy for enhancing fine-grained precision in MLLMs.
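The abstract does not spell out how the entropy and gradient signals are fused, so the sketch below is only one plausible reading: each head's attention map is weighted by its inverse spatial entropy (favoring concentrated heads) times the mean magnitude of a gradient-sensitivity map, the weighted maps are summed into a guidance map, and a crop box is taken from its thresholded support. The weighting formula and threshold are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def spatial_entropy(attn):
    """Shannon entropy of a normalized 2D attention map.

    Lower entropy means the attention mass is more spatially concentrated.
    """
    p = attn / (attn.sum() + 1e-8)
    return float(-(p * np.log(p + 1e-8)).sum())

def guidance_map(attn_maps, grad_maps):
    """Fuse per-head attention maps into a single cropping guidance map.

    Assumed fusion rule (not from the paper): weight each head by
    inverse spatial entropy times the mean |gradient sensitivity|.
    """
    fused = np.zeros_like(attn_maps[0])
    for attn, grad in zip(attn_maps, grad_maps):
        w = (1.0 / (spatial_entropy(attn) + 1e-8)) * float(np.abs(grad).mean())
        fused += w * attn
    return fused / (fused.max() + 1e-8)  # normalize to [0, 1]

def crop_box(gmap, thresh=0.5):
    """Bounding box (r0, c0, r1, c1) covering the thresholded guidance map."""
    ys, xs = np.where(gmap >= thresh)
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1

# Toy example: one spatially concentrated head and one diffuse head.
h, w = 8, 8
concentrated = np.zeros((h, w)); concentrated[2:4, 2:4] = 1.0
diffuse = np.ones((h, w))
gmap = guidance_map([concentrated, diffuse], [np.ones((h, w)), np.ones((h, w))])
print(crop_box(gmap))  # → (2, 2, 4, 4): the concentrated head dominates
```

In this toy case the low-entropy head gets a larger weight, so the crop box collapses onto its 2x2 hotspot rather than the whole image, which is the qualitative behavior the abstract describes.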