🤖 AI Summary
This work addresses the lack of theoretical guidance in existing test-time scaling methods, such as Best-of-N, for efficiently improving large language model (LLM) reasoning performance under computational constraints. The authors propose Scaling-Law Guided (SLG) Search, a novel algorithm that, for the first time, models test-time scaling behavior through the tail characteristics of the reward distribution. Leveraging this insight, SLG dynamically allocates computational resources to prioritize exploration of high-potential intermediate states. Integrating tail distribution estimation, scaling law prediction, and an adaptive search strategy, the method enjoys asymptotic no-regret guarantees. Empirical results demonstrate that, under identical computational budgets, SLG Search significantly outperforms Best-of-N, achieving expected rewards comparable to those attainable only with polynomially more computation, with consistent gains validated across multiple LLMs and reward models.
📝 Abstract
Test-time scaling has emerged as a critical avenue for enhancing the reasoning capabilities of Large Language Models (LLMs). Though the straightforward "best-of-$N$" (BoN) strategy has already demonstrated significant improvements in performance, it lacks principled guidance on the choice of $N$, budget allocation, and multi-stage decision-making, thereby leaving substantial room for optimization. While many works have explored such optimization, rigorous theoretical guarantees remain limited. In this work, we propose new methodologies to predict and improve scaling properties via tail-guided search. By estimating the tail distribution of rewards, our method predicts the scaling law of LLMs without the need for exhaustive evaluations. Leveraging this prediction tool, we introduce Scaling-Law Guided (SLG) Search, a new test-time algorithm that dynamically allocates compute to identify and exploit intermediate states with the highest predicted potential. We theoretically prove that SLG achieves vanishing regret compared to perfect-information oracles, and achieves expected rewards that would otherwise require a polynomially larger compute budget when using BoN. Empirically, we validate our framework across different LLMs and reward models, confirming that tail-guided allocation consistently achieves higher reward yields than Best-of-$N$ under identical compute budgets. Our code is available at https://github.com/PotatoJnny/Scaling-Law-Guided-search.
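To make the core idea concrete: if rewards of independent generations are i.i.d. with CDF $F$, the expected best-of-$N$ reward is $\mathbb{E}[\max_{i \le N} r_i] = \int_0^{r_{\max}} (1 - F(x)^N)\,dx$ for nonnegative rewards, so an estimate of $F$ (and especially its upper tail) from a small pilot sample lets one predict the BoN scaling curve without actually drawing $N$ samples. The sketch below illustrates this with a plain empirical-CDF estimator; the paper's actual tail estimator is not specified here, and the pilot data (a Beta distribution standing in for reward-model scores) is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pilot sample: rewards of 200 generations scored by a
# reward model. The Beta(2, 5) distribution is an illustrative stand-in,
# not data from the paper.
pilot = rng.beta(2.0, 5.0, size=200)

def predict_bon_reward(rewards: np.ndarray, N: int) -> float:
    """Predict E[max of N i.i.d. rewards] from a pilot sample.

    Uses E[max] = integral of (1 - F(x)^N) dx over the support
    (valid for nonnegative rewards), approximating F with the
    empirical CDF of the pilot sample.
    """
    xs = np.sort(np.asarray(rewards, dtype=float))
    F = np.arange(1, len(xs) + 1) / len(xs)   # empirical CDF at each xs
    widths = np.diff(np.concatenate(([0.0], xs)))
    survival = 1.0 - F ** N                   # P(max of N samples > x)
    return float(np.sum(widths * survival))

# Predicted BoN scaling curve: the expected reward grows with N and
# approaches the top of the pilot sample's support.
for N in (1, 8, 64):
    print(N, predict_bon_reward(pilot, N))
```

For $N = 1$ the estimate recovers (approximately) the pilot mean, and the curve flattens as $N$ grows, which is exactly the diminishing-returns shape a search algorithm can exploit when deciding how much compute each intermediate state deserves.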