Direct Preference Optimization with Rating Information: Practical Algorithms and Provable Gains

📅 2026-01-31
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses a key limitation of existing Direct Preference Optimization (DPO) methods, which rely solely on pairwise preference signals and neglect quantitative differences in response quality, leading to ambiguous training signals and suboptimal optimization efficiency. To overcome this, we propose a novel preference optimization algorithm that, for the first time, incorporates explicit score gaps into the DPO framework. Our approach preserves the advantage of not requiring an explicit reward model while leveraging fine-grained relative quality information to improve alignment. By designing a loss function that accounts for score differences, the method achieves provably faster statistical convergence and is robust to scoring noise. Extensive experiments show consistent and significant improvements over current DPO variants across multiple large language models and evaluation benchmarks, with stable performance gains even when the provided scores are inaccurate.

📝 Abstract
The class of direct preference optimization (DPO) algorithms has emerged as a promising approach for solving the alignment problem in foundation models. These algorithms work with very limited feedback in the form of pairwise preferences and fine-tune models to align with these preferences without explicitly learning a reward model. While the form of feedback used by these algorithms makes the data collection process easy and relatively accurate, its ambiguity regarding the quality of responses could have negative implications. For example, it is not clear whether a decrease (increase) in the likelihood of preferred (dispreferred) responses during the execution of these algorithms should be interpreted as a positive or negative phenomenon. In this paper, we study how to design algorithms that can leverage additional information in the form of a rating gap, which tells the learner how much better the chosen response is than the rejected one. We present new algorithms that achieve faster statistical rates than DPO in the presence of accurate rating-gap information. Moreover, we theoretically prove and empirically show that the performance of our algorithms is robust to inaccuracy in the rating gaps. Finally, we demonstrate the solid performance of our methods in comparison to a number of DPO-style algorithms across a wide range of LLMs and evaluation benchmarks.
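The paper's exact loss is not given on this page. As a rough illustration of the idea, here is a minimal sketch of one plausible way to fold a rating gap into the standard DPO objective: treat the score difference between the chosen and rejected responses as a target margin inside the logistic loss. All function names, the `gamma` scaling, and the margin placement are illustrative assumptions, not the paper's method.

```python
import math

def _sigmoid(z: float) -> float:
    # Logistic function used by the DPO loss.
    return 1.0 / (1.0 + math.exp(-z))

def dpo_loss(logp_w: float, logp_l: float,
             ref_logp_w: float, ref_logp_l: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss on one preference pair.

    logp_* are log-probabilities of the chosen (w) / rejected (l)
    responses under the policy; ref_logp_* under the frozen reference.
    """
    implicit_reward_gap = beta * ((logp_w - ref_logp_w)
                                  - (logp_l - ref_logp_l))
    return -math.log(_sigmoid(implicit_reward_gap))

def rating_gap_dpo_loss(logp_w: float, logp_l: float,
                        ref_logp_w: float, ref_logp_l: float,
                        rating_gap: float,
                        beta: float = 0.1,
                        gamma: float = 1.0) -> float:
    """Hypothetical rating-gap variant (illustrative only).

    The rating gap acts as a margin: pairs whose chosen response is
    rated much higher must be separated by a larger implicit reward
    gap before the loss becomes small. With rating_gap == 0 this
    reduces to the standard DPO loss.
    """
    implicit_reward_gap = beta * ((logp_w - ref_logp_w)
                                  - (logp_l - ref_logp_l))
    return -math.log(_sigmoid(implicit_reward_gap - gamma * rating_gap))
```

Under this sketch, a larger rating gap at a fixed implicit reward gap yields a larger loss, which pushes the policy to separate the two responses more strongly; a noisy gap only shifts the margin, which is consistent with the robustness the abstract claims.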
Problem

Research questions and friction points this paper is trying to address.

Direct Preference Optimization
Preference Feedback
Rating Information
Alignment Problem
Foundation Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct Preference Optimization
Rating Gap
Statistical Convergence Rate
Robust Alignment
Preference Learning