AI Summary
This work addresses the challenges of inaccurate region localization and limited perceptual performance in multimodal large language models for visual question answering, which often stem from attention dispersion and redundant textual prompts. To mitigate these issues, the authors propose a focus-attention-driven reasoning framework that aggregates cross-layer attention into a single intermediate layer and replaces the full question with concise semantic cues to guide salient region extraction. This approach effectively reduces semantic noise and enhances localization consistency. By integrating attention aggregation, semantic-guided salient region mining, and a scaling mechanism, the method enables end-to-end training and achieves significant improvements in fine-grained visual understanding across five visual question answering benchmarks, demonstrating both its effectiveness and generalizability.
Abstract
Thinking with Images improves fine-grained VQA for MLLMs by emphasizing visual cues. However, tool-augmented methods depend on grounding capability, which remains unreliable for MLLMs. In parallel, attention-driven methods that crop Regions of Interest (ROIs) have been proposed, but they are constrained by (1) fragmented attention signals scattered across layers, leading to suboptimal localization, and (2) reliance on question- or redundant-text-conditioned attention extraction. Our analysis reveals three patterns: MLLMs may attend to the correct region yet generate incorrect coordinates, where-to-look attention is often fragmented across layers, and attention extraction is query-sensitive. Motivated by these observations, we propose ConFoThinking, a Consolidated-Focused-Attention-Driven Thinking framework that learns to aggregate attention into a designated intermediate layer, from which we mine and zoom into salient regions for downstream visual understanding. Moreover, we extract attention using concise semantic cues of what to look for, which mitigates the semantic noise introduced by question- or redundant-text-based attention extraction. Experiments across five VQA benchmarks demonstrate that ConFoThinking significantly improves perception performance. The code, checkpoints, and dataset will be released upon acceptance.
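To make the pipeline concrete, here is a minimal NumPy sketch of the general idea behind attention-driven region zooming: average per-layer attention maps into a single consolidated map, take a bounding box around the highest-attention cells, then crop and upscale that region. This is a hypothetical illustration only; the function names, the uniform layer weighting, the top-fraction thresholding rule, and the nearest-neighbour zoom are all assumptions, not the paper's learned aggregation or scaling mechanism.

```python
import numpy as np

def aggregate_attention(layer_maps, weights=None):
    """Weighted average of per-layer attention maps (L, H, W) -> (H, W).
    Stand-in for a learned cross-layer consolidation (assumption)."""
    maps = np.asarray(layer_maps, dtype=float)
    if weights is None:
        weights = np.full(maps.shape[0], 1.0 / maps.shape[0])
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # Contract the layer axis: sum_l w_l * maps[l]
    return np.tensordot(weights, maps, axes=1)

def salient_bbox(attn, keep=0.05):
    """Bounding box covering the top `keep` fraction of attention cells."""
    k = max(1, int(round(keep * attn.size)))
    thresh = np.sort(attn, axis=None)[-k]          # k-th largest value
    ys, xs = np.nonzero(attn >= thresh)
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1  # x0, y0, x1, y1

def zoom(image, bbox, scale=2):
    """Crop the bbox and upsample by nearest-neighbour repetition."""
    x0, y0, x1, y1 = bbox
    crop = image[y0:y1, x0:x1]
    return np.repeat(np.repeat(crop, scale, axis=0), scale, axis=1)
```

In practice the consolidated map would come from a designated intermediate layer of the MLLM and the zoomed crop would be fed back for a second round of visual reasoning; this sketch only shows the geometric bookkeeping.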