Limits of PRM-Guided Tree Search for Mathematical Reasoning with LLMs

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether process reward models (PRMs) can effectively guide tree search to improve large language models' (LLMs) performance on branching mathematical reasoning tasks. Addressing limitations of chain-of-thought prompting and Best-of-N sampling, the authors propose an adaptive tree search algorithm that maximizes PRM scores over the intractable action space, and systematically compare Monte Carlo tree search against beam search. Empirical evaluation is conducted on 23 diverse mathematical problems using Qwen2.5-Math-7B-Instruct and its associated PRM. Results show that PRM-guided tree search does not significantly outperform Best-of-N; moreover, PRM state-value reliability degrades markedly with increasing reasoning depth, exposing fundamental deficiencies in out-of-distribution generalization and long-horizon credit assignment. The core contribution is an empirical demonstration that current PRM architectures are ill-suited for effective tree search, underscoring the need to redesign reward modeling for sequential reasoning.

📝 Abstract
While chain-of-thought prompting with Best-of-N (BoN) selection has become popular for mathematical reasoning in large language models (LLMs), its linear structure fails to capture the branching and exploratory nature of complex problem-solving. In this work, we propose an adaptive algorithm to maximize process reward model (PRM) scores over the intractable action space, and investigate whether PRM-guided tree search can improve mathematical reasoning by exploring multiple partial solution paths. Across $23$ diverse mathematical problems using Qwen2.5-Math-7B-Instruct with its associated PRM as a case study, we find that: (1) PRM-guided tree search shows no statistically significant improvements over BoN despite higher costs, (2) Monte Carlo tree search and beam search outperform other PRM-guided tree search methods, (3) PRMs poorly approximate state values and their reliability degrades with reasoning depth, and (4) PRMs generalize poorly out of distribution. This underperformance stems from tree search's greater reliance on unreliable PRM scores, suggesting different reward modeling is necessary before tree search can effectively enhance mathematical reasoning in LLMs.
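The baseline the paper compares against, Best-of-N with PRM selection, can be sketched compactly. This is a hedged illustration, not the paper's implementation: `sample_solution` and `prm_score` are hypothetical stand-ins for the LLM sampler and the Qwen PRM, and the `min()` aggregation over per-step scores is one common convention for process rewards, assumed here.

```python
import random

# Hypothetical stand-ins for the paper's components: an LLM sampler that
# draws one chain-of-thought solution, and a PRM that scores each step.
def sample_solution(problem: str, rng: random.Random) -> list[str]:
    n_steps = rng.randint(2, 5)
    return [f"step {i} for {problem}" for i in range(n_steps)]

def prm_score(problem: str, steps: list[str], rng: random.Random) -> float:
    # A real PRM scores each partial prefix; here we fake per-step scores
    # and aggregate with min(), a common choice for process rewards.
    return min(rng.random() for _ in steps)

def best_of_n(problem: str, n: int = 8, seed: int = 0) -> list[str]:
    """Sample n full solutions and keep the one the PRM ranks highest."""
    rng = random.Random(seed)
    candidates = [sample_solution(problem, rng) for _ in range(n)]
    return max(candidates, key=lambda s: prm_score(problem, s, rng))
```

The key property the paper exploits in its comparison: BoN queries the PRM only on complete solutions, whereas tree search must trust PRM scores on every partial prefix, which is where the reliability degradation with depth bites.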
Problem

Research questions and friction points this paper is trying to address.

Investigating PRM-guided tree search for mathematical reasoning
Evaluating tree search performance against Best-of-N selection
Analyzing PRM reliability limitations in multi-step reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive algorithm to maximize PRM scores over the intractable action space
Tree search that explores multiple partial solution paths
Monte Carlo tree search and beam search outperform other PRM-guided methods
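Of the tree search variants, PRM-guided beam search is the simplest to picture: keep the top-k partial solutions by PRM score at each depth. The sketch below is an assumption-laden illustration, with `expand` and `prm_score` as hypothetical stand-ins for the LLM's step proposer and the PRM's state-value estimate.

```python
import random

# Hedged sketch of PRM-guided beam search over reasoning steps; `expand`
# and `prm_score` are stand-ins, not the paper's actual components.
def expand(prefix: tuple[str, ...], branch: int) -> list[tuple[str, ...]]:
    # Propose `branch` candidate next steps for a partial solution.
    return [prefix + (f"s{len(prefix)}-{b}",) for b in range(branch)]

def prm_score(prefix: tuple[str, ...], rng: random.Random) -> float:
    return rng.random()  # stand-in for the PRM's value estimate of a prefix

def beam_search(depth: int = 3, width: int = 2, branch: int = 3,
                seed: int = 0) -> list[tuple[str, ...]]:
    rng = random.Random(seed)
    beam: list[tuple[str, ...]] = [()]  # start from the empty prefix
    for _ in range(depth):
        pool = [c for p in beam for c in expand(p, branch)]
        # Keep the `width` prefixes the PRM scores highest.
        beam = sorted(pool, key=lambda p: prm_score(p, rng), reverse=True)[:width]
    return beam
```

Every pruning decision here leans on a PRM score of a partial prefix, which is exactly the reliance the paper identifies as the failure mode: when those scores drift with depth, the beam discards good branches early.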