AI Summary
This work addresses the tendency of large multimodal models to over-rely on textual priors during inference, often neglecting visual information and thereby degrading performance on vision-centric tasks. To mitigate this issue, the authors propose VisRef, a framework that dynamically re-injects a semantically relevant, diverse, and globally representative subset of visual tokens at test time, steering the model's attention toward salient image content for more reliable multimodal reasoning. Notably, VisRef requires no additional training or reinforcement learning fine-tuning, achieving performance gains solely through a test-time visual refocusing mechanism. Under a fixed computational budget, VisRef outperforms existing methods by up to 6.4% across three visual reasoning benchmarks, demonstrating a favorable balance between efficiency and effectiveness.
Abstract
Large reasoning models have achieved strong performance on complex reasoning tasks by scaling test-time compute through extended reasoning. However, recent studies observe that in vision-dependent tasks, extended textual reasoning at inference time can degrade performance as models progressively lose attention to visual tokens and increasingly rely on textual priors alone. To address this, prior works use reinforcement learning (RL)-based fine-tuning to route visual tokens or employ refocusing mechanisms during reasoning. While effective, these methods are computationally expensive, requiring large-scale data generation and policy optimization. To leverage the benefits of test-time compute without additional RL fine-tuning, we propose VisRef, a visually grounded test-time scaling framework. Our key idea is to actively guide the reasoning process by re-injecting a coreset of visual tokens that are semantically relevant to the reasoning context while remaining diverse and globally representative of the image, enabling more grounded multimodal reasoning. Experiments on three visual reasoning benchmarks with state-of-the-art multimodal large reasoning models demonstrate that, under fixed test-time compute budgets, VisRef consistently outperforms existing test-time scaling approaches by up to 6.4%.
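The abstract describes selecting a coreset of visual tokens that is relevant to the reasoning context yet diverse and representative of the image. As a rough illustration only (the paper's actual selection criterion is not specified here), one common way to trade off relevance against redundancy is a greedy maximal-marginal-relevance-style heuristic over token embeddings; the function name, the weight `lam`, and the use of cosine similarity below are illustrative assumptions, not VisRef's algorithm.

```python
import numpy as np

def select_visual_coreset(token_embs, context_emb, k, lam=0.5):
    """Greedily pick k visual tokens, balancing relevance to the
    reasoning context against diversity among tokens already chosen.
    NOTE: a hypothetical sketch, not the method from the paper.
    """
    # Cosine-normalize so dot products act as similarity scores.
    tok = token_embs / np.linalg.norm(token_embs, axis=1, keepdims=True)
    ctx = context_emb / np.linalg.norm(context_emb)
    relevance = tok @ ctx                       # relevance to reasoning context
    selected = [int(np.argmax(relevance))]      # seed with the most relevant token
    while len(selected) < k:
        sim_to_sel = tok @ tok[selected].T      # similarity to already-chosen tokens
        redundancy = sim_to_sel.max(axis=1)     # closeness to the current coreset
        score = lam * relevance - (1 - lam) * redundancy
        score[selected] = -np.inf               # never re-pick a selected token
        selected.append(int(np.argmax(score)))
    return selected
```

A higher `lam` favors tokens aligned with the current reasoning context; a lower `lam` spreads the selection across the image, approximating the "globally representative" criterion.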