🤖 AI Summary
This work asks when approximate reward models suffice for effective inference-time scaling. Focusing on sequential Monte Carlo (SMC)-based inference frameworks, it identifies the Bellman error of the approximate reward model as the key quantity governing how well SMC allocates computation across partial reasoning trajectories. The central contribution is a proof that if the Bellman error of the approximate reward model is bounded by O(1/T) for reasoning sequences of length T, then combining that reward model with SMC reduces the computational complexity of inference from exponential in T to polynomial in T, an exponential efficiency gain despite using only approximate rewards.
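One standard way to formalize the Bellman error condition (a sketch under the assumption that the approximate reward model $\hat{V}$ acts as a value function over partial reasoning prefixes $s_{1:t}$, with base policy $\pi$; the exact definitions may differ in the paper):

```latex
% Bellman error of \hat{V} at step t, for a prefix s_{1:t}:
\delta_t \;=\; \Bigl|\, \hat{V}(s_{1:t}) \;-\;
    \mathbb{E}_{s_{t+1} \sim \pi(\cdot \mid s_{1:t})}
    \bigl[ \hat{V}(s_{1:t+1}) \bigr] \,\Bigr|,
\qquad
\hat{V}(s_{1:T}) = r(s_{1:T}).
% The result concerns the regime where the error is uniformly small:
\max_{1 \le t < T} \; \delta_t \;\le\; O(1/T).
```

Intuitively, a per-step error of $O(1/T)$ keeps the accumulated error over a length-$T$ trajectory bounded by a constant, which is what lets resampling against $\hat{V}$ remain faithful to the true target.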
📝 Abstract
Inference-time scaling has recently emerged as a powerful paradigm for improving the reasoning capability of large language models. Among various approaches, Sequential Monte Carlo (SMC) has become a particularly important framework, enabling iterative generation, evaluation, rejection, and resampling of intermediate reasoning trajectories. A central component in this process is the reward model, which evaluates partial solutions and guides the allocation of computation during inference. However, in practice, true reward models are never available. All deployed systems rely on approximate reward models, raising a fundamental question: Why and when do approximate reward models suffice for effective inference-time scaling? In this work, we provide a theoretical answer. We identify the Bellman error of the approximate reward model as the key quantity governing the effectiveness of SMC-based inference-time scaling. For a reasoning process of length $T$, we show that if the Bellman error of the approximate reward model is bounded by $O(1/T)$, then combining this reward model with SMC reduces the computational complexity of reasoning from exponential in $T$ to polynomial in $T$. This yields an exponential improvement in inference efficiency despite using only approximate rewards.
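The SMC loop described above (extend each partial trajectory, weight it by the approximate reward, then resample to concentrate computation on promising prefixes) can be sketched in a few lines. This is a minimal illustrative implementation, not the paper's algorithm: the `propose` and `reward` callables, the exponential-of-reward-increment weighting, and multinomial resampling are all assumptions made for the sketch.

```python
import math
import random

def smc_reasoning(propose, reward, T, n_particles=64, seed=0):
    """Sequential Monte Carlo over reasoning trajectories.

    propose(prefix, rng) -> one next step sampled from the base policy
    reward(prefix)       -> approximate reward of a partial trajectory
    Returns the highest-reward trajectory of length T among the particles.
    """
    rng = random.Random(seed)
    particles = [[] for _ in range(n_particles)]
    for _ in range(T):
        # Generation: extend each partial trajectory by one step.
        extended = [p + [propose(p, rng)] for p in particles]
        # Evaluation: weight by the increment in approximate reward
        # (the "twist" that steers sampling toward high-reward prefixes).
        weights = [math.exp(reward(p) - reward(p[:-1])) for p in extended]
        total = sum(weights)
        probs = [w / total for w in weights]
        # Rejection/resampling: low-weight prefixes die, high-weight
        # prefixes are duplicated, reallocating compute.
        particles = [extended[_sample_index(probs, rng)]
                     for _ in range(n_particles)]
    return max(particles, key=reward)

def _sample_index(probs, rng):
    """Draw an index from a categorical distribution (inverse CDF)."""
    u, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if u < acc:
            return i
    return len(probs) - 1
```

As a toy usage, with binary "reasoning steps" and a reward that counts ones, resampling rapidly concentrates the particle population on near-all-ones trajectories even though no single particle ever searches the exponentially large space:

```python
propose = lambda prefix, rng: rng.randint(0, 1)
reward = lambda prefix: float(sum(prefix))
best = smc_reasoning(propose, reward, T=8, n_particles=128, seed=1)
```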