🤖 AI Summary
This work addresses the challenge that current vision-language models (VLMs) struggle to accurately localize critical evidence regions in tasks requiring fine-grained visual details or multi-region reasoning, such as document understanding. The authors propose a training-free, test-time evidence retrieval method that backpropagates gradients of the output distribution’s entropy to visual token embeddings to generate visual relevance maps. By integrating an iterative zoom-and-refine relocation mechanism with a spatial entropy–based stopping criterion, the approach actively focuses on multiple evidence regions. Evaluated across four mainstream VLM architectures and seven benchmarks, the method consistently improves performance—particularly in high-resolution and detail-sensitive scenarios—and yields more interpretable localization results.
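The spatial entropy–based stopping criterion can be sketched in a few lines: treat the (nonnegative) relevance map as a probability distribution over spatial positions and stop the zoom-and-refine loop once its entropy drops below a threshold, i.e., once relevance has concentrated on a small region. This is a minimal illustrative sketch, not the paper's implementation; the map shapes and the threshold value are assumptions.

```python
import numpy as np

def spatial_entropy(relevance):
    """Shannon entropy of a relevance map normalized to a distribution."""
    p = relevance / relevance.sum()
    p = p[p > 0]                      # convention: 0 * log 0 = 0
    return float(-(p * np.log(p)).sum())

def should_stop(relevance, threshold=1.0):
    """Stop zooming once relevance is concentrated (threshold is assumed)."""
    return spatial_entropy(relevance) < threshold

# Diffuse map (uniform relevance): high entropy, keep zooming.
diffuse = np.ones((4, 4))
# Peaked map (one hot cell): zero entropy, stop.
peaked = np.zeros((4, 4))
peaked[1, 2] = 1.0

print(round(spatial_entropy(diffuse), 3), should_stop(diffuse))
print(round(spatial_entropy(peaked), 3), should_stop(peaked))
```

A uniform 4×4 map attains the maximum entropy log 16 ≈ 2.77, while a single-cell map attains 0, so the criterion cleanly separates "still ambiguous" from "evidence localized".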
📝 Abstract
Despite rapid progress, pretrained vision-language models still struggle when answers depend on tiny visual details or on combining clues spread across multiple regions, as in documents and compositional queries. We address this by framing grounding as test-time evidence retrieval: given a query, the model should actively identify where to look next to resolve ambiguity. To this end, we propose a training-free, model-intrinsic grounding method that uses uncertainty as supervision. Specifically, we compute the entropy of the model's next-token distribution and backpropagate it to the visual token embeddings to obtain an entropy-gradient relevance map, without auxiliary detectors or attention-map heuristics. We then extract and rank multiple coherent regions to support multi-evidence queries, and introduce an iterative zoom-and-reground procedure with a spatial-entropy stopping rule to avoid over-refinement. Experiments on seven benchmarks across four VLM architectures demonstrate consistent improvements over existing methods, with the largest gains on detail-critical and high-resolution settings, while also producing more interpretable evidence localizations.
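The core step, backpropagating the next-token entropy to the visual token embeddings, can be illustrated with a toy model. In the actual method the gradient flows through a full VLM; here a random linear "next-token head" stands in, so the shapes, the head `W`, and the 4×4 token grid are all assumptions made for the sketch. The closed-form softmax-entropy gradient dH/dz_k = -p_k (log p_k + H) replaces framework autodiff.

```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, embed_dim, vocab = 16, 8, 10          # 16 visual tokens = 4x4 grid
V = rng.normal(size=(num_tokens, embed_dim))      # visual token embeddings
W = rng.normal(size=(vocab, num_tokens * embed_dim)) * 0.1  # toy head (assumed)

def entropy_grad(V):
    """Next-token entropy and its gradient w.r.t. the visual embeddings."""
    z = W @ V.ravel()                             # logits
    z = z - z.max()                               # stabilize softmax
    p = np.exp(z) / np.exp(z).sum()               # next-token distribution
    H = -(p * np.log(p)).sum()                    # entropy used as supervision
    g_z = -p * (np.log(p) + H)                    # dH/dz in closed form
    return H, (W.T @ g_z).reshape(V.shape)        # chain rule back to V

H, g = entropy_grad(V)
# Aggregate the per-embedding gradient into one relevance score per token,
# then reshape to the spatial grid to get the relevance map.
relevance = np.linalg.norm(g, axis=1).reshape(4, 4)
print(relevance.round(3))
```

Regions with large gradient norm are those whose embeddings most influence the model's uncertainty, which is exactly what the method ranks and zooms into; a real implementation would obtain `g` via the framework's autodiff on the frozen VLM rather than this hand-derived gradient.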