Head-Aware Visual Cropping: Enhancing Fine-Grained VQA with Attention-Guided Subimage

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that multimodal large language models often struggle with precise localization and reasoning in fine-grained visual question answering due to low-resolution inputs and attention noise. To overcome this, the authors propose a training-free visual cropping method that first introduces an OCR-based diagnostic task to identify attention heads with genuine visual grounding capabilities. They then construct a cropping guidance map by integrating spatial entropy—measuring regional concentration—and gradient sensitivity—assessing each region’s contribution to the prediction—to extract task-relevant sub-images. Evaluated on multiple fine-grained VQA benchmarks, the proposed approach significantly outperforms existing cropping strategies, achieving more accurate region localization and enhanced visual grounding without any additional training.

📝 Abstract
Multimodal Large Language Models (MLLMs) show strong performance in Visual Question Answering (VQA) but remain limited in fine-grained reasoning due to low-resolution inputs and noisy attention aggregation. We propose Head-Aware Visual Cropping (HAVC), a training-free method that improves visual grounding by leveraging a selectively refined subset of attention heads. HAVC first filters heads through an OCR-based diagnostic task, ensuring that only those with genuine grounding ability are retained. At inference, these heads are further refined using spatial entropy for stronger spatial concentration and gradient sensitivity for predictive contribution. The fused signals produce a reliable Visual Cropping Guidance Map, which highlights the most task-relevant region and guides the cropping of a subimage that is then provided to the MLLM together with the image-question pair. Extensive experiments on multiple fine-grained VQA benchmarks demonstrate that HAVC consistently outperforms state-of-the-art cropping strategies, achieving more precise localization and stronger visual grounding, and providing a simple yet effective strategy for enhancing precision in MLLMs.
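The fusion step described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes each retained head contributes a 2-D attention map and a same-shaped gradient-sensitivity map, weights heads by inverse spatial entropy (concentration) times mean gradient magnitude (predictive contribution), and thresholds the fused guidance map to get a crop box. The exact fusion rule, normalization, and thresholding used by HAVC are not specified here; all function names and the weighting scheme are assumptions for illustration.

```python
import numpy as np

def spatial_entropy(attn):
    """Shannon entropy of a normalized 2-D attention map.
    Lower entropy means more spatially concentrated attention."""
    p = attn / (attn.sum() + 1e-8)
    return float(-(p * np.log(p + 1e-8)).sum())

def guidance_map(head_attns, grad_maps, eps=1e-8):
    """Fuse per-head attention maps into a cropping guidance map.
    Each head is weighted by (a) inverse spatial entropy and
    (b) mean absolute gradient sensitivity. This weighted average
    is one plausible instantiation of the fusion, not the paper's."""
    weights = []
    for attn, grad in zip(head_attns, grad_maps):
        concentration = 1.0 / (spatial_entropy(attn) + eps)
        sensitivity = float(np.abs(grad).mean())
        weights.append(concentration * sensitivity)
    w = np.asarray(weights)
    w = w / (w.sum() + eps)
    # Normalize each head's map before averaging so scales are comparable.
    return sum(wi * a / (a.sum() + eps) for wi, a in zip(w, head_attns))

def crop_box(gmap, frac=0.05):
    """Smallest axis-aligned box covering cells above the (1 - frac) quantile."""
    thresh = np.quantile(gmap, 1 - frac)
    ys, xs = np.where(gmap >= thresh)
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1
```

With a head that attends sharply to one patch and carries high gradient sensitivity, the fused map peaks at that patch and the resulting box localizes it; a diffuse, low-sensitivity head is down-weighted. In the actual pipeline the box would be mapped from the attention grid back to pixel coordinates before cropping.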
Problem

Research questions and friction points this paper is trying to address.

Visual Question Answering
Fine-Grained Reasoning
Multimodal Large Language Models
Visual Grounding
Attention Aggregation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Head-Aware Visual Cropping
Fine-Grained VQA
Attention Head Selection
Visual Grounding
Training-Free Cropping
Junfei Xie
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China; University of Science and Technology of China, Hefei, China
Peng Pan
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China; University of Science and Technology of China, Hefei, China
Xulong Zhang
Ping An Technology (Shenzhen) Co., Ltd.
Federated Large Models · Trusted Computing · Graph Computing