🤖 AI Summary
This work addresses a limitation of already-deployed, non-reasoning vision-language models (VLMs) by proposing a test-time inference framework that elicits their latent long-horizon reasoning capabilities—without fine-tuning or supervision. Methodologically, it applies Monte Carlo Tree Search (MCTS) to vision reasoning, treating self-generated subproblems as latent variables along reasoning trajectories; it integrates dynamic subproblem generation, chain-of-thought prompt injection, and multi-step vision–language scheduling, thereby modeling reasoning as an implicit decision-search process. The core contribution is an emergent enhancement of reasoning capability at zero training cost. Empirically, the method achieves a +2% absolute accuracy gain on MMMU-PRO overall, a substantial +9% improvement on Liberal Arts tasks, and consistent performance gains across three major benchmarks.
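The summary above describes an MCTS-style search over self-generated subquestions. As a rough illustration only, the control flow might look like the following sketch; `propose_subquestions`, `answer`, and `rollout_score` are hypothetical stand-ins for calls to a deployed VLM (stubbed here so the loop is runnable), and none of the names come from the paper itself.

```python
import math
import random

class Node:
    """One node in the search tree: a partial reasoning trajectory."""
    def __init__(self, trace, parent=None):
        self.trace = trace          # list of (subquestion, subanswer) pairs so far
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def ucb(node, c=1.4):
    # Upper-confidence bound balancing exploration and exploitation.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def propose_subquestions(trace):
    # Stub for "dynamic subproblem generation" (a VLM call in practice).
    return [f"sub-q{len(trace)}-{i}" for i in range(2)]

def answer(subq):
    # Stub for answering a subquestion with the VLM.
    return f"ans({subq})"

def rollout_score(trace):
    # Stub reward; in practice this would score the final answer quality.
    return random.random()

def mcts(root, iterations=50):
    for _ in range(iterations):
        # 1. Selection: descend by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # 2. Expansion: inject new subquestion-subanswer pairs as children.
        for sq in propose_subquestions(node.trace):
            node.children.append(Node(node.trace + [(sq, answer(sq))], parent=node))
        # 3. Simulation: score one expanded trajectory.
        child = random.choice(node.children)
        reward = rollout_score(child.trace)
        # 4. Backpropagation: update statistics up to the root.
        n = child
        while n is not None:
            n.visits += 1
            n.value += reward
            n = n.parent
    # Commit to the most-visited first-level subquestion.
    return max(root.children, key=lambda c: c.visits)

random.seed(0)
best = mcts(Node([]))
print(best.trace[0][0])  # first subquestion of the selected trajectory
```

In the actual method, the chosen subquestion-subanswer pairs would be injected into the model's output stream to seed a long reasoning trace; this sketch only shows the generic select/expand/simulate/backpropagate loop under the stated assumptions.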
📝 Abstract
Recent research in vision-language models (VLMs) has centered around the possibility of equipping them with implicit long-form chain-of-thought reasoning -- akin to the success observed in language models -- via distillation and reinforcement learning. But what about the non-reasoning models already trained and deployed across the internet? Should we simply abandon them, or is there hope for a search mechanism that can elicit hidden knowledge and induce long reasoning traces -- without any additional training or supervision? In this paper, we explore this possibility using a Monte Carlo Tree Search (MCTS)-inspired algorithm, which injects subquestion-subanswer pairs into the model's output stream. We show that framing reasoning as a search process -- where subquestions act as latent decisions within a broader inference trajectory -- helps the model "connect the dots" between fragmented knowledge and produce extended reasoning traces in non-reasoning models. We evaluate our method across three benchmarks and observe consistent improvements. Notably, our approach yields a 2% overall improvement on MMMU-PRO, including a significant 9% gain in Liberal Arts.