🤖 AI Summary
This work addresses the vulnerability of reward models in reinforcement learning from human feedback (RLHF) to annotation noise and systematic biases—such as those induced by response length or stylistic features—which can lead to reward hacking. To mitigate this, the authors propose the Bayesian Non-Negative Reward Model (BNRM), which integrates non-negative factor analysis into the Bradley-Terry preference model. BNRM employs instance-level latent variables to disentangle reward representations and exploits sparsity in global latent factors for implicit debiasing, yielding an uncertainty-aware, robust reward learning framework. Its two-stage "disentangle–debias" architecture, combined with amortized variational inference, substantially alleviates reward over-optimization while improving robustness under distribution shift and interpretability, outperforming existing baselines.
📝 Abstract
Reward models learned from human preferences are central to aligning large language models (LLMs) via reinforcement learning from human feedback, yet they are often vulnerable to reward hacking due to noisy annotations and systematic biases such as response length or style. We propose the Bayesian Non-Negative Reward Model (BNRM), a principled reward modeling framework that integrates non-negative factor analysis into the Bradley-Terry (BT) preference model. BNRM represents rewards through a sparse, non-negative latent factor generative process that operates at two complementary levels: instance-specific latent variables induce disentangled reward representations, while sparsity over global latent factors acts as an implicit debiasing mechanism that suppresses spurious correlations. Together, this disentanglement-then-debiasing structure enables robust, uncertainty-aware reward learning. To scale BNRM to modern LLMs, we develop an amortized variational inference network conditioned on deep model representations, allowing efficient end-to-end training. Extensive empirical results demonstrate that BNRM substantially mitigates reward over-optimization, improves robustness under distribution shifts, and yields more interpretable reward decompositions than strong baselines.
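To make the core modeling idea concrete, here is a minimal NumPy sketch of the ingredients the abstract describes: a reward expressed as a non-negative combination of instance-level latent factors, a BT preference loss over a (chosen, rejected) pair, and an L1 sparsity penalty on the global factor weights standing in for the implicit debiasing mechanism. All names, dimensions, and the deterministic encoder are illustrative assumptions; the actual BNRM uses a Bayesian generative process with amortized variational inference, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    return np.log1p(np.exp(x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: d-dim LLM hidden states, K latent reward factors.
d, K = 16, 4

# Toy encoder mapping hidden states to non-negative factor activations z >= 0
# (a stand-in for BNRM's instance-specific latent variables).
W_enc = rng.normal(size=(K, d))
# Global factor weights; sparsity over these is what the paper uses to
# suppress spurious factors (e.g. length or style biases).
w = np.abs(rng.normal(size=K))

def reward(h):
    z = softplus(W_enc @ h)   # instance-level non-negative factors
    return w @ z              # non-negative linear combination -> scalar reward

# Bradley-Terry loss for one preference pair of (toy) response representations.
h_chosen, h_rejected = rng.normal(size=d), rng.normal(size=d)
p_prefer = sigmoid(reward(h_chosen) - reward(h_rejected))
bt_loss = -np.log(p_prefer)

# L1 penalty on global weights as a simple proxy for the sparsity prior.
total_loss = bt_loss + 0.01 * np.abs(w).sum()
```

Because both the factor activations and the global weights are constrained non-negative, each latent factor can only add reward, which is what makes the learned decomposition parts-based and easier to interpret.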