Guided Speculative Inference for Efficient Test-Time Alignment of LLMs

📅 2025-06-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the low reward-alignment efficiency and high decoding overhead of large language models (LLMs) at inference time, this paper proposes Guided Speculative Inference (GSI), a framework that integrates speculative sampling from a small auxiliary model, reward modeling, and soft best-of-*n* selection. GSI is the first method to establish theoretically that its sampling policy approximates the soft-optimal tilted distribution, and it derives an explicit upper bound on the KL divergence between the induced and target distributions. Crucially, GSI achieves dual-model collaborative sampling (π<sub>B</sub> + π<sub>S</sub>) without imposing extra sampling cost on the primary LLM. Experiments on rigorous reasoning benchmarks, including MATH500, OlympiadBench, and Minerva Math, show that GSI significantly outperforms standard soft best-of-*n* sampling with the auxiliary model and reward-guided speculative decoding. Notably, in certain configurations it even surpasses soft best-of-*n* sampling with the base model alone, supporting both the theoretical soundness and the practical efficacy of the design.

📝 Abstract
We propose Guided Speculative Inference (GSI), a novel algorithm for efficient reward-guided decoding in large language models. GSI combines soft best-of-$n$ test-time scaling with a reward model $r(x,y)$ and speculative samples from a small auxiliary model $\pi_S(y\mid x)$. We provably approximate the optimal tilted policy $\pi_{\eta,B}(y\mid x) \propto \pi_B(y\mid x)\exp(\eta\, r(x,y))$ of soft best-of-$n$ under the primary model $\pi_B$. We derive a theoretical bound on the KL divergence between our induced distribution and the optimal policy. In experiments on reasoning benchmarks (MATH500, OlympiadBench, Minerva Math), our method achieves higher accuracy than standard soft best-of-$n$ with $\pi_S$ and reward-guided speculative decoding (Liao et al., 2025), and in certain settings even outperforms soft best-of-$n$ with $\pi_B$. The code is available at https://github.com/j-geuter/GSI.
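The soft best-of-$n$ primitive from the abstract — drawing $n$ candidates and selecting $y_i$ with probability proportional to $\exp(\eta\, r(x, y_i))$, which approximates the tilted policy — can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' code:

```python
import math
import random

def soft_best_of_n(candidates, rewards, eta=1.0, rng=random):
    """Soft best-of-n: pick candidate y_i with probability proportional to
    exp(eta * r(x, y_i)), approximating the tilted policy
    pi_eta(y|x) ∝ pi(y|x) * exp(eta * r(x, y)).

    `candidates` and `rewards` are parallel lists; `eta` is the tilt
    strength (hypothetical parameter names for illustration)."""
    # Subtract the max reward before exponentiating for numerical stability.
    m = max(rewards)
    weights = [math.exp(eta * (r - m)) for r in rewards]
    return rng.choices(candidates, weights=weights, k=1)[0]
```

As $\eta \to \infty$ this recovers hard best-of-$n$ (argmax reward); $\eta = 0$ gives uniform selection over the $n$ candidates.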
Problem

Research questions and friction points this paper is trying to address.

Efficient reward-guided decoding for large language models
Approximating the optimal tilted policy of soft best-of-$n$
Improving accuracy on reasoning benchmarks at low decoding cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines soft best-of-$n$ sampling with a reward model
Uses speculative samples from a small auxiliary model
Provably approximates the optimal tilted policy
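One plausible reading of the GSI recipe summarized above — speculative candidates from $\pi_S$, reward scoring, soft best-of-$n$ selection, and a fallback to $\pi_B$ — is sketched below. All function names, the threshold rule, and the interface are assumptions for illustration, not the paper's implementation:

```python
import math
import random

def gsi_step(prompt, sample_small, sample_base, reward_fn,
             n=8, eta=1.0, tau=0.5, rng=random):
    """GSI-style decoding step (hypothetical interface): draw n speculative
    candidates from the small model pi_S, score them with the reward model,
    and defer to the base model pi_B only when no candidate scores well.

    Returns the chosen completion and its reward."""
    candidates = [sample_small(prompt) for _ in range(n)]
    rewards = [reward_fn(prompt, y) for y in candidates]
    if max(rewards) < tau:
        # No speculative candidate clears the (assumed) reward threshold:
        # fall back to sampling from the primary model pi_B.
        y = sample_base(prompt)
        return y, reward_fn(prompt, y)
    # Soft best-of-n over the speculative candidates:
    # select index i with probability proportional to exp(eta * r_i).
    m = max(rewards)  # shift for numerical stability
    weights = [math.exp(eta * (r - m)) for r in rewards]
    idx = rng.choices(range(n), weights=weights, k=1)[0]
    return candidates[idx], rewards[idx]
```

The design intent, per the summary, is that most samples come from the cheap auxiliary model, so the expensive primary model is queried only on fallback.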