When to Think and When to Look: Uncertainty-Guided Lookback

📅 2025-11-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the performance degradation of large vision-language models (LVLMs) in visual reasoning caused by overly long reasoning chains that neglect image content. The authors propose uncertainty-guided lookback, a training-free decoding strategy that estimates reasoning uncertainty in real time and triggers adaptive image re-examination at critical steps, combining adaptive lookback prompts with breadth search for controllable inference. Their analysis shows that short lookback phrases, which explicitly refer back to the image, are strongly enriched in successful trajectories and correlate with better visual grounding, motivating a "when to think, when to look" view of decoding. Evaluated on the InternVL3.5 and Qwen3-VL families, the approach sets a new state of the art on MMMU-val under fixed model families and token budgets, delivers the largest gains in categories where standard thinking is weak, and generalizes consistently across five additional benchmarks.

📝 Abstract
Test-time thinking (that is, generating explicit intermediate reasoning chains) is known to boost performance in large language models and has recently shown strong gains for large vision-language models (LVLMs). However, despite these promising results, there is still no systematic analysis of how thinking actually affects visual reasoning. We provide the first such analysis with a large-scale, controlled comparison of thinking for LVLMs, evaluating ten variants from the InternVL3.5 and Qwen3-VL families on MMMU-val under generous token budgets and multi-pass decoding. We show that more thinking is not always better; long chains often yield long, wrong trajectories that ignore the image and underperform the same models run in standard instruct mode. A deeper analysis reveals that certain short lookback phrases, which explicitly refer back to the image, are strongly enriched in successful trajectories and correlate with better visual grounding. Building on this insight, we propose uncertainty-guided lookback, a training-free decoding strategy that combines an uncertainty signal with adaptive lookback prompts and breadth search. Our method improves overall MMMU performance, delivers the largest gains in categories where standard thinking is weak, and outperforms several strong decoding baselines, setting a new state of the art under fixed model families and token budgets. We further show that this decoding strategy generalizes, yielding consistent improvements on five additional benchmarks, including two broad multimodal suites and math-focused visual reasoning datasets.
Problem

Research questions and friction points this paper is trying to address.

Analyzing how thinking affects visual reasoning in large vision language models
Addressing long, wrong reasoning chains that ignore visual information
Improving visual grounding through uncertainty-guided adaptive lookback strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-guided lookback decoding strategy
Training-free method with adaptive prompts
Breadth search combined with uncertainty signals
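The core idea behind these contributions can be sketched in a few lines: monitor a per-step uncertainty signal during decoding and, when it spikes, inject a short lookback prompt that sends the model back to the image before continuing. The sketch below is a minimal, hypothetical illustration, not the paper's implementation; the entropy threshold, the `LOOKBACK_PROMPT` string, and the toy probability distributions are all assumptions, and the breadth-search component is omitted for brevity.

```python
import math

# Illustrative lookback phrase; the paper's actual prompts are not reproduced here.
LOOKBACK_PROMPT = "Let me look at the image again."

def step_entropy(token_probs):
    """Mean per-token entropy of one reasoning step, used as the uncertainty signal."""
    def h(p):
        return -sum(q * math.log(q) for q in p if q > 0)
    return sum(h(p) for p in token_probs) / len(token_probs)

def decode_with_lookback(steps, threshold=1.0):
    """Interleave reasoning steps with lookback prompts.

    `steps` is a list of (step_text, per_token_probability_distributions)
    pairs, standing in for a real model's incremental decoding. When a
    step's entropy exceeds `threshold`, a lookback prompt is appended so
    the model re-grounds on the image before the next step.
    """
    trace = []
    for text, token_probs in steps:
        trace.append(text)
        if step_entropy(token_probs) > threshold:
            trace.append(LOOKBACK_PROMPT)
    return trace

# Toy trajectory: a confident step (peaked distribution, low entropy)
# followed by an uncertain one (near-uniform distribution, high entropy).
confident = [[0.9, 0.05, 0.05]]
uncertain = [[0.34, 0.33, 0.33]]
trace = decode_with_lookback([
    ("Step 1: the chart shows three bars.", confident),
    ("Step 2: the tallest bar is unclear.", uncertain),
])
```

Only the uncertain step triggers a lookback, so the trace contains both reasoning steps plus one injected prompt; in a real LVLM the injected prompt would condition the next decoding round on renewed attention to the image.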
🔎 Similar Papers
No similar papers found.