Regularized Best-of-N Sampling with Minimum Bayes Risk Objective for Language Model Alignment

📅 2024-04-01
📈 Citations: 5
Influential: 0
🤖 AI Summary
To address reward hacking caused by inaccurate reward models, this paper proposes MBR-BoN, a variant of Best-of-N (BoN) sampling that incorporates the Minimum Bayes Risk (MBR) objective as an explicit proximity regularizer between candidate responses and the reference policy, yielding more robust alignment with human preferences at decoding time. This is the first use of the MBR objective as an explicit regularizer in BoN sampling. Theoretical analysis and empirical evaluation show that MBR-BoN mitigates reward model over-optimization. On the AlpacaFarm and HH-RLHF benchmarks, MBR-BoN outperforms both standard BoN sampling and pure MBR decoding. Moreover, preference datasets generated with MBR-BoN improve downstream performance when used for DPO training.

📝 Abstract
Best-of-N (BoN) sampling with a reward model has been shown to be an effective strategy for aligning Large Language Models (LLMs) to human preferences at decoding time. However, BoN sampling is susceptible to reward hacking when the reward model is insufficiently accurate, owing to limits in the quality or quantity of the preference dataset. Because the reward model is an imperfect proxy for the true objective, over-optimizing its value can compromise performance on the true objective. In this research, we propose MBR-BoN, a variant of BoN that aims to mitigate reward hacking at inference time by incorporating the Minimum Bayes Risk (MBR) objective as a proximity regularization term. We show empirically and analytically that the MBR objective quantifies the proximity of a response to the reference policy, serving as a proximity regularizer. We evaluate MBR-BoN on the AlpacaFarm and Anthropic's hh-rlhf datasets and show that it outperforms both BoN sampling and MBR decoding. We also apply MBR-BoN to generate a pairwise preference dataset for Direct Preference Optimization (DPO). Empirical results show that models trained on a dataset generated with MBR-BoN outperform those trained with vanilla BoN. Our code is available at https://github.com/CyberAgentAILab/regularized-bon
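The selection rule described above can be sketched in a few lines: each of the N sampled candidates is scored by the reward model plus a weighted MBR term, where the MBR term is the candidate's average utility (similarity) against the other samples, acting as a proximity proxy for the reference policy. This is a minimal illustration, not the paper's implementation; the function names, the toy utility, and the weight `beta` are assumptions for demonstration.

```python
# Minimal sketch of MBR-BoN candidate selection (illustrative, not the
# paper's code). `utility` is any pairwise similarity; `reward` is the
# proxy reward model; `beta` trades off reward against the MBR term.

def mbr_score(candidate, candidates, utility):
    # Monte Carlo MBR objective: mean utility of `candidate` against all
    # sampled responses, which serve as pseudo-references from the policy.
    return sum(utility(candidate, other) for other in candidates) / len(candidates)

def mbr_bon_select(candidates, reward, utility, beta=1.0):
    # Regularized Best-of-N: maximize reward plus beta-weighted MBR term.
    return max(candidates, key=lambda y: reward(y) + beta * mbr_score(y, candidates, utility))

# Toy example: the reward model "hacks" toward "zzz", while the MBR term
# (positional character overlap) favors responses typical of the sample pool.
def toy_utility(a, b):
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

toy_reward = {"aaa": 0.1, "aab": 0.2, "zzz": 1.0}.get
pool = ["aaa", "aab", "zzz"]

print(mbr_bon_select(pool, toy_reward, toy_utility, beta=0.0))  # plain BoN: "zzz"
print(mbr_bon_select(pool, toy_reward, toy_utility, beta=5.0))  # regularized: "aab"
```

With `beta=0` the rule reduces to vanilla BoN and picks the highest-reward (possibly hacked) response; a larger `beta` pulls the choice toward responses that are typical of the sampled pool.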
Problem

Research questions and friction points this paper is trying to address.

Reward Modeling
Best-of-N Sampling
Reward Hacking
Innovation

Methods, ideas, or system contributions that make the work stand out.

MBR-BoN
Large Language Models
Reward Model Accuracy