🤖 AI Summary
Multimodal large language models (MLLMs) excel at reasoning but perceive fine-grained visual details poorly, limiting their performance on tasks requiring high-precision visual understanding. Existing methods that crop salient regions improve local fidelity but induce "Contextual Blindness": a structural disconnect between high-fidelity crops and the global original image. We argue this failure stems from insufficient structural diversity in the input, not from inadequate information volume. To address it, we propose Visual Funnel, a training-free, two-stage framework with two novel mechanisms: (i) *Contextual Anchoring*, which adaptively locates crop centers in a single forward pass; and (ii) an *Entropy-Scaled Portfolio*, which dynamically modulates crop scales via attention entropy and hierarchically fuses local detail with global context. Evaluated on multiple fine-grained visual understanding benchmarks, Visual Funnel significantly outperforms single-crop and unstructured multi-crop baselines, demonstrating that hierarchical, structurally diverse inputs are critical for mitigating Contextual Blindness.
📝 Abstract
Multimodal Large Language Models (MLLMs) demonstrate impressive reasoning capabilities but often fail to perceive fine-grained visual details, limiting their applicability in precision-demanding tasks. Methods that crop salient regions of an image offer a partial solution, but we identify a critical limitation they introduce: "Contextual Blindness". This failure arises from a structural disconnect between the high-fidelity details in the crop and the broader global context of the original image, even when all necessary visual information is present. We argue that this limitation stems not from a lack of information quantity, but from a lack of structural diversity in the model's input. To resolve this, we propose Visual Funnel, a training-free, two-stage approach. Visual Funnel first performs Contextual Anchoring to identify the region of interest in a single forward pass. It then constructs an Entropy-Scaled Portfolio that preserves hierarchical context, ranging from focal detail to broader surroundings, by dynamically determining crop sizes from attention entropy and refining crop centers. Through extensive experiments, we show that Visual Funnel significantly outperforms naive single-crop and unstructured multi-crop baselines. Our results further show that simply adding more unstructured crops yields limited benefit and can even hurt performance, confirming that the hierarchical structure of our portfolio is key to resolving Contextual Blindness.
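The two stages described above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: it assumes access to a 2D attention map, uses the attention peak as the anchor (the paper's exact anchoring and center-refinement rules are not specified here), and the scaling constants (`0.2`, `0.3`, doubling per level) are hypothetical choices to show how entropy could modulate crop sizes.

```python
import numpy as np

def attention_entropy(attn):
    """Shannon entropy of a 2D attention map (normalized to a distribution)."""
    p = attn / attn.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def contextual_anchor(attn):
    """Crop center = location of peak attention (illustrative choice)."""
    return np.unravel_index(np.argmax(attn), attn.shape)

def entropy_scaled_portfolio(image, attn, num_levels=3):
    """Build nested crops around the anchor; diffuse (high-entropy)
    attention yields a larger base crop, and each level widens the view."""
    h, w = image.shape[:2]
    ay, ax = contextual_anchor(attn)
    # Map anchor to pixel coords if the attention map is lower resolution.
    cy = int(ay * h / attn.shape[0])
    cx = int(ax * w / attn.shape[1])
    # Normalize entropy against the uniform-map maximum, log(H * W).
    ent_ratio = attention_entropy(attn) / np.log(attn.size)
    crops = []
    for level in range(num_levels):
        # Base fraction grows with entropy; each level doubles the span.
        frac = min(1.0, (0.2 + 0.3 * ent_ratio) * (2 ** level))
        ch, cw = int(h * frac), int(w * frac)
        y0 = int(np.clip(cy - ch // 2, 0, h - ch))
        x0 = int(np.clip(cx - cw // 2, 0, w - cw))
        crops.append(image[y0:y0 + ch, x0:x0 + cw])
    # The full image closes the hierarchy as the global-context level.
    crops.append(image)
    return crops
```

The returned list runs from focal detail to the untouched global view, which is the hierarchical structure the portfolio is meant to preserve.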