🤖 AI Summary
Large language models (LLMs) suffer from error accumulation in mathematical reasoning due to autoregressive generation, while existing tree-search verifiers often prune promising partial solutions prematurely because they lack reliable mechanisms to assess incomplete reasoning paths.
Method: We propose a token-level value supervision framework that models the expected cumulative reward as the probability of eventual answer correctness, enabling fine-grained, forward-looking evaluation of reasoning quality at early stages. Our approach formulates this value probabilistically, estimates it empirically from sampled completions, trains verifiers with token-level supervision, and instantiates them on Mistral and Llama architectures.
Contribution/Results: Experiments on GSM8K and MATH show substantial accuracy gains over step-wise verifiers. To our knowledge, this is the first work to introduce token-level value signals into verifier training, addressing the long-standing difficulty of assessing intermediate states in mathematical reasoning.
📝 Abstract
Large Language Models (LLMs) have demonstrated impressive problem-solving capabilities in mathematics through step-by-step reasoning chains. However, because of their autoregressive token-by-token generation, they are susceptible to reasoning errors that degrade subsequent reasoning steps and the final answer. Recent works have proposed adopting external verifiers to guide the generation of reasoning paths, but existing approaches use models trained with step-level labels to assess reasoning chains that are generated token by token. Consequently, such verifiers struggle to recognize discriminative details at the token level within a reasoning path and cannot evaluate whether an intermediate reasoning path is on a promising track toward the correct final answer. To remedy the lack of sound, token-grained math-verification signals, we devise a novel training scheme for verifiers that applies token-level supervision with the expected cumulative reward (i.e., value). Furthermore, we propose a practical formulation of the cumulative reward by reducing it to the probability of the final answer eventually being correct, thereby enabling empirical estimation of the value. Experimental results on mathematical reasoning benchmarks show that the Token-Supervised Value Model (TVM) outperforms step-by-step verifiers on GSM8K and MATH with Mistral and Llama.
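The abstract's reduction of the value to "the probability of future correctness of the final answer" suggests a simple Monte Carlo estimator: from a given token prefix, sample several completions and take the fraction that reach the correct answer as that prefix's value label. The sketch below illustrates this idea only; the names `sample_completion` and `is_correct` are hypothetical stand-ins, not functions from the paper, and the actual TVM training pipeline may differ.

```python
def estimate_token_value(prefix, sample_completion, is_correct, num_rollouts=16):
    """Monte Carlo estimate of a partial reasoning path's value.

    prefix            -- the token prefix (partial reasoning path) to evaluate
    sample_completion -- callable: prefix -> one sampled full completion
    is_correct        -- callable: completion -> True if final answer is correct
    num_rollouts      -- number of sampled completions (more = lower variance)

    Returns the empirical probability that a completion of `prefix`
    ends in the correct final answer, which serves as the token-level
    value label for supervising the verifier.
    """
    hits = sum(1 for _ in range(num_rollouts)
               if is_correct(sample_completion(prefix)))
    return hits / num_rollouts
```

Under this view, every token of a solution prefix receives a label in [0, 1] rather than a binary step-level label, which is what allows the verifier to score incomplete reasoning paths in a forward-looking way.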