🤖 AI Summary
This work proposes an efficient, training-free inference-time method that eliminates the need for external rewards or verifiers. Addressing both the high training costs of existing reinforcement learning–based approaches and the computational expense and poor scalability of training-free MCMC sampling, the method uses theoretical analysis to approximate the global power distribution with a per-token scaled low-temperature distribution, which is sharpened autoregressively during generation. It is the first approach to achieve distribution sharpening without iterative MCMC sampling, matching or surpassing one-shot GRPO on mathematical reasoning, question answering, and code generation tasks while reducing inference latency by over an order of magnitude.
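The summary does not spell out the algorithm, but the core mechanism it describes, per-token low-temperature sharpening with an autoregressive correction, can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it applies a temperature T = 1/α per token (softmax of α-scaled logits is exactly the renormalized token-level power distribution), and `future_quality_logits` is a hypothetical stub standing in for the paper's theoretically derived trajectory-quality scaling factor.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch only, not the paper's algorithm. `gpt2` is a stand-in
# base model; the paper evaluates four (unspecified here) LLMs.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()


def future_quality_logits(logits: torch.Tensor) -> torch.Tensor:
    # Hypothetical placeholder: in the paper, the scaling factor captures
    # future trajectory quality. Here we return zeros (no reweighting), which
    # reduces the sampler to plain per-token temperature scaling.
    return torch.zeros_like(logits)


@torch.no_grad()
def sharpened_generate(prompt: str, alpha: float = 4.0, max_new_tokens: int = 64) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        logits = model(ids).logits[:, -1, :]  # next-token logits
        # softmax(alpha * logits) == p(token)^alpha renormalized, i.e. the
        # token-level power distribution at temperature T = 1/alpha,
        # shifted by the (stubbed) trajectory-quality term in log space.
        scaled = alpha * logits + future_quality_logits(logits)
        probs = torch.softmax(scaled, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)


print(sharpened_generate("Question: what is 12 * 7? Answer:"))
```

Because each step is a single forward pass plus a softmax, the cost matches ordinary autoregressive decoding, which is where the claimed 10x+ latency advantage over iterative MCMC comes from.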
📝 Abstract
Reinforcement learning (RL) post-training is a dominant approach for improving the reasoning performance of large language models (LLMs), yet growing evidence suggests that its gains arise primarily from distribution sharpening rather than the acquisition of new capabilities. Recent work has shown that sampling from the power distribution of LLMs using Markov chain Monte Carlo (MCMC) can recover performance comparable to RL post-training without relying on external rewards; however, the high computational cost of MCMC makes such approaches impractical for widespread adoption. In this work, we propose a theoretically grounded alternative that eliminates the need for iterative MCMC. We derive a novel formulation showing that the global power distribution can be approximated by a scaled token-level low-temperature distribution, where the scaling factor captures future trajectory quality. Leveraging this insight, we introduce a training-free and verifier-free algorithm that sharpens the base model's generative distribution autoregressively. Empirically, we evaluate our method on math, QA, and code tasks across four LLMs and show that it matches or surpasses one-shot GRPO without relying on any external rewards, while reducing inference latency by over 10x compared to MCMC-based sampling.
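To make the "scaling factor captures future trajectory quality" claim concrete, here is one standard way to write the exact autoregressive conditional of a power-tilted distribution, in our own notation (the paper's formulation and approximation may differ):

$$
\pi_\alpha(y_t \mid x, y_{<t}) \;=\; \frac{p_\theta(y_t \mid x, y_{<t})^{\alpha}\; V_\alpha(x, y_{\le t})}{\sum_{y'} p_\theta(y' \mid x, y_{<t})^{\alpha}\; V_\alpha(x, y_{<t}, y')},
\qquad
V_\alpha(x, y_{\le t}) \;=\; \sum_{y_{>t}} \prod_{s > t} p_\theta(y_s \mid x, y_{<s})^{\alpha}.
$$

The first factor is exactly the token-level distribution at temperature $T = 1/\alpha$, while $V_\alpha$ sums the $\alpha$-powered probabilities of all continuations and thus measures how good the future trajectories from a prefix are; approximating or bounding this term is what turns global power-distribution sampling into a tractable per-token procedure.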