🤖 AI Summary
Existing RLHF methods rely on the Bradley–Terry reward model, whose strong parametric assumptions fail to capture the complexity and noise of real human preferences, leading to reward misspecification and policy degradation. This work proposes a robust RLHF framework that addresses these limitations. First, it unifies variance reduction for both reward estimation and policy-gradient estimation, yielding a tighter regret bound. Second, it drops the Bradley–Terry assumption in favor of robust statistical estimation with an explicit bias–variance trade-off analysis, enabling effective modeling of heterogeneous preference data and label noise. Third, on the Anthropic Helpful and Harmless benchmark, 77–81% of the framework's responses are preferred over those of baseline methods, improving both alignment quality and generalization stability. The approach thus strengthens RLHF in both theoretical rigor and practical robustness to real-world preference data.
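For context, the Bradley–Terry model criticized above scores a preferred response over a rejected one with probability `sigmoid(r_chosen - r_rejected)`, and reward models are typically fit by minimizing the corresponding negative log-likelihood. A minimal sketch of that standard objective (function names are ours for illustration, not from the paper):

```python
import math

def bradley_terry_prob(r_chosen: float, r_rejected: float) -> float:
    """P(chosen preferred over rejected) = sigmoid(r_chosen - r_rejected)
    under the Bradley-Terry model."""
    return 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))

def bt_nll(pairs) -> float:
    """Average negative log-likelihood over (r_chosen, r_rejected) pairs --
    the usual reward-model training loss that the paper argues is
    misspecified for noisy, heterogeneous human preferences."""
    return -sum(math.log(bradley_terry_prob(rc, rr)) for rc, rr in pairs) / len(pairs)

pairs = [(1.2, 0.3), (0.5, 0.9), (2.0, -1.0)]
print(bt_nll(pairs))
```

Note that the loss keeps decreasing as reward margins grow, so a single mislabeled pair can pull the fitted rewards arbitrarily far — one concrete way the parametric assumption interacts badly with label noise.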
📝 Abstract
Reinforcement learning from human feedback (RLHF) has emerged as a key technique for aligning the output of large language models (LLMs) with human preferences. To learn the reward function, most existing RLHF algorithms use the Bradley-Terry model, which relies on assumptions about human preferences that may not reflect the complexity and variability of real-world judgments. In this paper, we propose a robust algorithm to enhance the performance of existing approaches under such reward model misspecifications. Theoretically, our algorithm reduces the variance of reward and policy estimators, leading to improved regret bounds. Empirical evaluations on LLM benchmark datasets demonstrate that the proposed algorithm consistently outperforms existing methods, with 77-81% of responses being favored over baselines on the Anthropic Helpful and Harmless dataset.
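The abstract does not spell out the paper's robust estimator, but a standard device for the label-noise problem it describes is to smooth the Bradley–Terry likelihood: with some probability `eps`, treat the recorded preference as flipped. A hedged sketch of that generic idea (`eps` and all names are illustrative, not the authors' method):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def smoothed_bt_loss(r_chosen: float, r_rejected: float, eps: float = 0.1) -> float:
    """Bradley-Terry negative log-likelihood mixed with the flipped label,
    assuming each recorded preference is wrong with probability eps."""
    d = r_chosen - r_rejected
    return -(1 - eps) * math.log(sigmoid(d)) - eps * math.log(sigmoid(-d))

# Unlike the plain NLL, this loss is minimized at a finite reward margin
# d* = log((1 - eps) / eps), so the model stops chasing extreme margins
# on pairs that may simply be mislabeled.
d_star = math.log((1 - 0.1) / 0.1)
print(smoothed_bt_loss(d_star, 0.0), smoothed_bt_loss(5.0, 0.0))
```

The finite optimum is the whole point: it caps the incentive to overfit individual noisy comparisons, which is one plausible route to the kind of robustness the abstract claims.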