Decoding as Optimisation on the Probability Simplex: From Top-K to Top-P (Nucleus) to Best-of-K Samplers

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing language model decoding methods lack a unified theoretical framework and often rely on heuristic hyperparameter tuning. This work formulates the decoding process as a regularized optimization problem over the probability simplex and provides a unified interpretation of multiple mainstream decoding strategies through the introduction of KL-divergence anchoring and analysis of optimality conditions. Building upon this framework, we propose a novel Best-of-K sampler that substantially improves generation quality under high-temperature settings. Experimental results demonstrate that our method achieves an 18.6% absolute accuracy gain on the MATH500 benchmark when applied to the Qwen2.5-Math-7B model, confirming its effectiveness and broad applicability.

📝 Abstract
Decoding sits between a language model and everything we do with it, yet it is still treated as a heuristic knob-tuning exercise. We argue decoding should be understood as a principled optimisation layer: at each token, we solve a regularised problem over the probability simplex that trades off model score against structural preferences and constraints. This single template recovers greedy decoding, Softmax sampling, Top-K, Top-P, and Sparsemax-style sparsity as special cases, and explains their common structure through optimality conditions. More importantly, the framework makes it easy to invent new decoders without folklore. We demonstrate this by designing Best-of-K (BoK), a KL-anchored coverage objective aimed at multi-sample pipelines (self-consistency, reranking, verifier selection). BoK targets the probability of covering a good alternative within a fixed K-sample budget, and we show that such samplers can improve accuracy, for example by +18.6% absolute for Qwen2.5-Math-7B on MATH500 at high sampling temperatures.
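For concreteness, the standard truncation decoders the abstract treats as special cases of one simplex template can be sketched as temperature-scaled softmax followed by truncation and renormalisation. This is a minimal illustrative sketch of Top-K and Top-P (nucleus) filtering, not the paper's optimisation formulation or its Best-of-K sampler; all function names are ours.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a point on the probability simplex.
    z = (logits - logits.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

def top_k(probs, k):
    # Keep the k most probable tokens, renormalise back onto the simplex.
    q = np.zeros_like(probs)
    idx = np.argsort(probs)[-k:]
    q[idx] = probs[idx]
    return q / q.sum()

def top_p(probs, p):
    # Nucleus sampling: keep the smallest prefix (by descending probability)
    # whose cumulative mass reaches p, then renormalise.
    order = np.argsort(probs)[::-1]
    csum = np.cumsum(probs[order])
    cutoff = np.searchsorted(csum, p) + 1  # include the token that crosses p
    q = np.zeros_like(probs)
    keep = order[:cutoff]
    q[keep] = probs[keep]
    return q / q.sum()
```

Both decoders share the structure the paper analyses: a sparse support set selected from the model's distribution, with the surviving mass renormalised over the simplex.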
Problem

Research questions and friction points this paper is trying to address.

decoding
language models
optimization
probability simplex
sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

decoding as optimization
probability simplex
Best-of-K sampling
KL-anchored coverage
structured decoding