AI Summary
This work re-examines the theoretical properties of Best-of-N (BoN) sampling under more realistic assumptions, using win-rate as the optimization objective. While BoN is widely employed for inference-time alignment, prior analyses suggested it is statistically suboptimal and vulnerable to reward hacking. The authors show that, under mild quality conditions on the reference and reward models, standard BoN is already optimal in terms of win-rate, a finding obscured in prior work by a mismatch between the analyzed objective and the one used in practice. Because BoN can still be reward-hacked, they also propose a simple, practical variant that provably eliminates reward hacking while retaining statistical optimality. The resulting method balances computational efficiency, statistical performance, and robustness to reward manipulation.
Abstract
Best-of-N (BoN) sampling is a widely used inference-time alignment method for language models, whereby N candidate responses are sampled from a reference model and the one with the highest predicted reward according to a learned reward model is selected. Despite its widespread practical use, recent theoretical work has suggested that it is statistically suboptimal and vulnerable to reward hacking, the process by which models exploit weaknesses in the learned reward model to achieve high estimated reward without genuinely improving performance. We revisit this question under assumptions that more closely reflect practice than those of prior work. In particular, in contradistinction to earlier analyses that focused on expected true reward, which may not be meaningful in many practical settings, we investigate how inference-time alignment affects the win-rate, a pairwise comparison-based metric more closely aligned with how reward models are trained and evaluated in practice. We demonstrate that, under minimal conditions on the quality of the reference model and learned reward model, properly tuned BoN is both computationally and statistically optimal in achieving high win-rate, partially explaining its widespread practical success. Because BoN remains susceptible to reward hacking in this setting, we propose a simple and practical variant that provably eliminates reward hacking while maintaining optimal statistical performance. Finally, we show that prior approaches are provably suboptimal when considering win-rate, highlighting the importance of choosing appropriate objectives when analyzing inference-time alignment methods.
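The BoN procedure and the win-rate metric described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's method: `best_of_n`, `win_rate`, and the numeric stand-ins for the reference model, reward model, and pairwise judge are all hypothetical names introduced here.

```python
import random

def best_of_n(prompt, sample_fn, reward_fn, n):
    """Best-of-N: draw n candidates from the reference model (sample_fn)
    and return the one scoring highest under the learned reward model."""
    candidates = [sample_fn(prompt) for _ in range(n)]
    return max(candidates, key=reward_fn)

def win_rate(policy_fn, ref_fn, judge_fn, prompts):
    """Fraction of prompts on which the policy's response beats the
    reference's response under a pairwise judge (ties count as 1/2)."""
    score = 0.0
    for p in prompts:
        a, b = policy_fn(p), ref_fn(p)
        score += 0.5 if a == b else float(judge_fn(a, b))
    return score / len(prompts)

# Toy setup (assumed, not from the paper): responses are real numbers,
# the true preference favors larger values, and the "learned" reward
# model is a noisy proxy for that preference.
rng = random.Random(0)
ref = lambda p: rng.gauss(0.0, 1.0)         # stand-in reference model
reward = lambda x: x + rng.gauss(0.0, 0.5)  # stand-in noisy reward model
judge = lambda a, b: a > b                  # stand-in true pairwise judge

bon_policy = lambda p: best_of_n(p, ref, reward, n=16)
print(win_rate(bon_policy, ref, judge, prompts=range(500)))
```

In this toy setting, because the noisy reward model still ranks candidates roughly consistently with the true preference, increasing n pushes the BoN policy's win-rate against the reference model well above the 1/2 baseline, mirroring the quality condition the abstract refers to.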