Robust Preference Optimization: Aligning Language Models with Noisy Preference Feedback

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional preference alignment methods (e.g., RLHF, DPO) assume homogeneous, noise-free human preferences—yet real-world preferences are heterogeneous and annotations are often erroneous, leading to model misalignment. To address this, we propose Robust Preference Optimization (RPO), an EM-based framework that dynamically infers the posterior probability of label correctness and adaptively reweights preference losses. RPO is algorithm-agnostic, enabling plug-and-play robustification of any preference learning method (e.g., DPO, IPO). We theoretically prove that, under well-calibrated models, RPO converges to the true noise level. Experiments on Mistral and Llama 3 demonstrate that RPO achieves up to +7.0% and +5.4% win-rate improvements on AlpacaEval 2 and Arena-Hard, respectively—outperforming all baselines. RPO thus provides a general, theoretically grounded solution for reliable LLM alignment under noisy and heterogeneous preference data.

📝 Abstract
Standard human preference-based alignment methods, such as Reinforcement Learning from Human Feedback (RLHF), are a cornerstone technology for aligning Large Language Models (LLMs) with human values. However, these methods are all underpinned by a critical, yet flawed assumption: human preferences are homogeneous (representing a single, unified preference) and the collected data is noiseless (free from error). In reality, neither is true, since human preferences are pluralistic and annotators can make mistakes. This creates a discrepancy between the recorded data and the ground-truth preferences, which can misguide the model and degrade its performance. To address this challenge, we introduce Robust Preference Optimization (RPO). RPO employs an Expectation-Maximization (EM) algorithm to infer the posterior probability of each label's correctness, which is used to adaptively re-weight each data point in the training loss to mitigate noise. We further generalize this approach by establishing a theoretical link between arbitrary preference losses and their corresponding probabilistic models. This generalization enables the systematic transformation of existing alignment algorithms into their robust counterparts, elevating RPO from a specific algorithm to a meta-framework for robust preference alignment. Theoretically, we prove that under the condition of a perfectly calibrated model, RPO is guaranteed to converge to the true noise level of the dataset. Our experiments demonstrate RPO's effectiveness as a meta-framework, consistently enhancing four state-of-the-art alignment algorithms (DPO, IPO, SimPO, and CPO). When applied to Mistral and Llama 3 models, the RPO-enhanced methods achieve substantial win rate gains on AlpacaEval 2 and Arena-Hard, with improvements of up to 7.0% and 5.4%, respectively.
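The abstract describes the reweighting mechanism only at a high level. Below is a minimal, hypothetical sketch of how an EM-style posterior reweighting of a DPO-style pairwise loss could look; the Bradley-Terry form of the correctness probability, the function names, and the noise-rate update are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rpo_style_dpo_loss(policy_logratios, ref_logratios, eps, beta=0.1):
    """Hypothetical noise-aware DPO-style loss (illustrative sketch only).

    policy_logratios / ref_logratios: log p(y_w|x) - log p(y_l|x) under the
    policy and the frozen reference model, one value per preference pair.
    eps: current estimate of the dataset-level label-noise rate in [0, 1).
    """
    margin = beta * (policy_logratios - ref_logratios)
    # Model-implied (Bradley-Terry) probability that the recorded label is correct.
    p_correct = torch.sigmoid(margin)

    # E-step: posterior probability that each recorded preference is correct,
    # mixing the model's belief with the current noise-rate estimate.
    with torch.no_grad():
        w = (1 - eps) * p_correct / ((1 - eps) * p_correct + eps * (1 - p_correct))

    # Adaptively reweighted loss: trust each pair according to w and attribute
    # the remaining mass to the label-flipped hypothesis.
    loss = -(w * F.logsigmoid(margin) + (1 - w) * F.logsigmoid(-margin))
    return loss.mean(), w

def m_step_noise_rate(posteriors):
    # M-step: re-estimate the noise rate as the expected fraction of flipped labels.
    return float((1 - posteriors).mean())
```

In a full training loop, the E-step posteriors and the M-step noise-rate update would alternate with gradient steps on the reweighted loss; the paper's convergence claim concerns this noise-rate estimate under a perfectly calibrated model.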
Problem

Research questions and friction points this paper is trying to address.

Addressing noisy preference feedback in language model alignment
Mitigating annotation errors in human preference data
Generalizing robust optimization for existing alignment algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses EM algorithm to infer label correctness probabilities
Adaptively re-weights training data to mitigate noise
Transforms existing alignment algorithms into robust versions (see the sketch after this list)
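Since RPO is presented as a meta-framework, the same posterior reweighting can in principle wrap any per-pair preference loss. The sketch below is a hypothetical illustration of that plug-and-play pattern; the `robustify` wrapper, the DPO/IPO loss stubs, and the posterior form are assumptions for illustration, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def robustify(pairwise_loss_fn):
    """Hypothetically wrap a per-pair preference loss into a noise-aware version."""
    def robust_loss(margin, eps):
        loss_keep = pairwise_loss_fn(margin)    # loss if the recorded label is correct
        loss_flip = pairwise_loss_fn(-margin)   # loss if the label was flipped
        # Posterior that the recorded label is correct, given noise-rate estimate eps.
        p = torch.sigmoid(margin)
        w = ((1 - eps) * p / ((1 - eps) * p + eps * (1 - p))).detach()
        return (w * loss_keep + (1 - w) * loss_flip).mean()
    return robust_loss

# Plug-and-play: the same wrapper applied to DPO-style and IPO-style pair losses.
dpo_pair_loss = lambda m: -F.logsigmoid(m)   # DPO-style log-sigmoid loss
ipo_pair_loss = lambda m: (m - 0.5) ** 2     # IPO-style squared loss (target folded into the constant)
robust_dpo = robustify(dpo_pair_loss)
robust_ipo = robustify(ipo_pair_loss)
```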
Authors

Xiaoyang Cao
Massachusetts Institute of Technology

Zelai Xu
PhD Student, Tsinghua University
Language Agent, Reinforcement Learning, Multi-Agent System

Mo Guang
Li Auto Inc.

Kaiwen Long
Li Auto Inc.

Michiel A. Bakker
Massachusetts Institute of Technology

Yu Wang
Tsinghua University

Chao Yu
Tsinghua University, Zhongguancun Academy