Optimal Policy Minimum Bayesian Risk

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the low decoding accuracy, poor robustness, and high computational cost of large language models (LLMs) on complex reasoning tasks. We propose an enhanced Minimum Bayes Risk Decoding (MBRD) framework that jointly incorporates reward, risk, and similarity signals. Our method’s core innovations are threefold: (i) the first integration of the optimal policy framework from KL-constrained reinforcement learning into MBRD, enabling dynamic, inference-time weighting of heterogeneous signals; (ii) a sample-size-adaptive sampling mechanism that eliminates reliance on fixed majority voting; and (iii) theoretical guarantees of asymptotic optimality. Crucially, our approach requires no additional model training. On MATH-500 and HumanEval, it significantly outperforms baselines—including Best-of-N and majority voting—while achieving superior accuracy–computation trade-offs. Empirical results validate its advantages in sample efficiency, generalization, and robustness.

📝 Abstract
Inference scaling can help LLMs solve complex reasoning problems through extended runtime computation. On top of targeted supervision for long chain-of-thought (long-CoT) generation, purely inference-time techniques such as best-of-N (BoN) sampling, majority voting, or more generally, minimum Bayes risk decoding (MBRD), can further improve LLM accuracy by generating multiple candidate solutions and aggregating over them. These methods typically leverage additional signals in the form of reward models and risk/similarity functions that compare generated samples, e.g., exact match in some normalized space or standard similarity metrics such as ROUGE. Here we present a novel method for incorporating reward and risk/similarity signals into MBRD. Based on the concept of optimal policy in KL-controlled reinforcement learning, our framework provides a simple and well-defined mechanism for leveraging such signals, offering several advantages over traditional inference-time methods: higher robustness, improved accuracy, and well-understood asymptotic behavior. In addition, it allows for the development of a sample-efficient variant of MBRD that can adjust the number of samples to generate according to the difficulty of the problem, without relying on majority vote counts. We empirically demonstrate the advantages of our approach on math (MATH-500) and coding (HumanEval) tasks using recent open-source models. We also present a comprehensive analysis of its accuracy-compute trade-offs.
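The combination the abstract describes can be sketched in code. Standard MBRD picks the candidate minimizing expected risk against the other samples; the KL-controlled optimal policy takes the form π*(y) ∝ π₀(y)·exp(r(y)/β), which suggests weighting each sample by a softmax of its reward. The sketch below follows that reading; the function name `mbrd_select` and the temperature `beta` are illustrative choices, not names from the paper.

```python
import math

def mbrd_select(candidates, rewards, risk, beta=1.0):
    """Pick the candidate minimizing expected risk under reward-tilted
    weights w_i ∝ exp(r_i / beta), i.e. an optimal-policy-style softmax
    over the reward model scores."""
    m = max(rewards)
    weights = [math.exp((r - m) / beta) for r in rewards]  # shift by max for stability
    z = sum(weights)
    weights = [w / z for w in weights]

    def expected_risk(i):
        # Expected risk of candidate i against the weighted sample set.
        return sum(w * risk(candidates[i], c) for w, c in zip(weights, candidates))

    return min(range(len(candidates)), key=expected_risk)

# With an exact-match risk, this reduces to reward-weighted majority voting:
exact = lambda a, b: 0.0 if a == b else 1.0
samples = ["42", "42", "17", "42"]
rewards = [0.9, 0.8, 0.3, 0.7]
best = mbrd_select(samples, rewards, exact)  # selects a "42" sample
```

Note that with a uniform reward (or β → ∞) the weights become uniform and plain majority voting is recovered, which is one way to see MBRD as a generalization of the voting baselines mentioned above.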
Problem

Research questions and friction points this paper is trying to address.

Improving LLM accuracy via optimal policy MBRD
Enhancing robustness and accuracy in reasoning tasks
Optimizing sample efficiency for problem difficulty
Innovation

Methods, ideas, or system contributions that make the work stand out.

KL-controlled reinforcement learning for MBRD
Sample-efficient MBRD adjusts generation dynamically
Integrates reward and similarity signals robustly
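The sample-size-adaptive idea above can be illustrated with a sequential loop: draw one sample at a time and stop once the reward-tilted weight mass agreeing with the current MBR winner is high enough. This is purely a sketch under assumed design choices; the stopping rule, `threshold`, and bounds `min_n`/`max_n` are hypothetical, not the paper's exact criterion.

```python
import math

def adaptive_mbrd(sample_fn, reward_fn, risk, beta=1.0,
                  min_n=4, max_n=32, threshold=0.7):
    """Sequentially sample candidates; stop when the reward-tilted weight
    mass of the current MBR winner's zero-risk agreement cluster exceeds
    `threshold` (an illustrative stopping rule)."""
    candidates, rewards = [], []
    while len(candidates) < max_n:
        y = sample_fn()
        candidates.append(y)
        rewards.append(reward_fn(y))
        if len(candidates) < min_n:
            continue
        # Reward-tilted weights w_i ∝ exp(r_i / beta), normalized.
        m = max(rewards)
        w = [math.exp((r - m) / beta) for r in rewards]
        z = sum(w)
        w = [x / z for x in w]
        # Current MBR winner: minimum expected risk under the weights.
        exp_risk = [sum(wj * risk(ci, cj) for wj, cj in zip(w, candidates))
                    for ci in candidates]
        i = min(range(len(candidates)), key=exp_risk.__getitem__)
        # Weight mass of samples that fully agree with the winner.
        mass = sum(wj for wj, cj in zip(w, candidates)
                   if risk(candidates[i], cj) == 0)
        if mass >= threshold:
            break
    return candidates[i]
```

Easy problems, where samples quickly agree, terminate near `min_n`; hard problems keep sampling up to `max_n`, which matches the abstract's point about adjusting sample count to problem difficulty without relying on raw majority vote counts.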