Socratic-MCTS: Test-Time Visual Reasoning by Asking the Right Questions

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of already-deployed, non-reasoning vision-language models (VLMs) by proposing a test-time inference framework that elicits their latent long-horizon reasoning capabilities without fine-tuning or supervision. Methodologically, it applies a Monte Carlo Tree Search (MCTS)-inspired algorithm to visual reasoning, treating self-generated subquestions as latent decisions that construct reasoning trajectories; it combines dynamic subquestion generation, chain-of-thought prompt injection, and multi-step vision-language scheduling, thereby modeling reasoning as an implicit decision-search process. The core contribution is enhanced reasoning capability at zero training cost. Empirically, the method achieves a 2% absolute accuracy gain on MMMU-PRO overall, including a substantial 9% improvement on Liberal Arts, with consistent gains across three benchmarks.
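The loop described above can be sketched as follows. This is a minimal, hypothetical illustration of the idea, not the paper's implementation: `propose_subquestions`, `answer`, and `score` stand in for VLM calls and are stubbed with toy logic, and full MCTS selection/backpropagation is simplified to a greedy depth-limited search over subquestion choices.

```python
# Hypothetical sketch of the MCTS-inspired test-time loop: subquestions act as
# latent decisions; each step injects a subquestion-subanswer pair into the
# context before the final answer is produced. All function names below are
# illustrative assumptions, not APIs from the paper.

def propose_subquestions(context, k=3):
    """Stub for the VLM proposing k candidate subquestions."""
    return [f"{context} | sub-q{i}" for i in range(k)]

def answer(question):
    """Stub for the VLM answering a (sub)question."""
    return f"ans({question})"

def score(trajectory):
    """Stub value estimate; a real system might score via self-consistency."""
    return len(trajectory)  # toy heuristic: longer traces score higher

def search(question, depth=2, k=2):
    """Greedy depth-limited variant standing in for the full tree search."""
    trajectory, context = [], question
    for _ in range(depth):
        candidates = propose_subquestions(context, k)
        # expand each candidate one step and keep the best-scoring rollout
        best = max(candidates, key=lambda q: score(trajectory + [(q, answer(q))]))
        trajectory.append((best, answer(best)))
        context = best
    # inject the subquestion-subanswer pairs, then answer the main question
    injected = " ".join(f"Q:{q} A:{a}" for q, a in trajectory)
    return f"{injected} FINAL:{answer(question)}"
```

A real system would replace the stubs with model calls and keep visit counts and value estimates per node (e.g., UCB-style selection) rather than committing greedily at each depth.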

📝 Abstract
Recent research in vision-language models (VLMs) has centered around the possibility of equipping them with implicit long-form chain-of-thought reasoning -- akin to the success observed in language models -- via distillation and reinforcement learning. But what about the non-reasoning models already trained and deployed across the internet? Should we simply abandon them, or is there hope for a search mechanism that can elicit hidden knowledge and induce long reasoning traces -- without any additional training or supervision? In this paper, we explore this possibility using a Monte Carlo Tree Search (MCTS)-inspired algorithm, which injects subquestion-subanswer pairs into the model's output stream. We show that framing reasoning as a search process -- where subquestions act as latent decisions within a broader inference trajectory -- helps the model "connect the dots" between fragmented knowledge and produce extended reasoning traces in non-reasoning models. We evaluate our method across three benchmarks and observe consistent improvements. Notably, our approach yields a 2% overall improvement on MMMU-PRO, including a significant 9% gain in Liberal Arts.
Problem

Research questions and friction points this paper is trying to address.

Enabling non-reasoning VLMs to produce long reasoning traces without retraining
Eliciting hidden knowledge via MCTS-driven injection of subquestion-subanswer pairs
Helping deployed vision-language models connect fragmented knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

MCTS-inspired algorithm for visual reasoning
Subquestion-subanswer pairs enhance reasoning
Search process connects fragmented knowledge