What Matters in Data for DPO?

📅 2025-08-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how the distribution of preference data affects the performance of Direct Preference Optimization (DPO), combining theoretical modeling, an analysis of online DPO, and multi-task empirical experiments. Our findings indicate that the quality of chosen responses is the dominant factor in DPO effectiveness, while the quality of rejected responses has comparatively limited impact; contrastive preference signals help primarily by improving chosen-response quality. Building on this insight, we show that online DPO effectively reduces to supervised fine-tuning on the chosen responses alone, which explains the empirical success of practical strategies such as rejection sampling and quality filtering. Experiments confirm that improving chosen-response quality consistently yields stable performance gains, underscoring the central importance of high-quality chosen data in preference learning.
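
For reference, the standard DPO objective at the center of this analysis (Rafailov et al., 2023), where y_w is the chosen response, y_l the rejected response, pi_ref the frozen reference policy, beta the strength of the implicit KL regularization, and sigma the logistic function:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim \mathcal{D}}
    \left[ \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right) \right]
```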

📝 Abstract
Direct Preference Optimization (DPO) has emerged as a simple and effective approach for aligning large language models (LLMs) with human preferences, bypassing the need for a learned reward model. Despite its growing adoption, a fundamental question remains open: what characteristics of preference data are most critical for DPO performance? In this work, we provide a systematic study of how preference data distribution influences DPO, from both theoretical and empirical perspectives. We show that the quality of chosen responses plays a dominant role in optimizing the DPO objective, while the quality of rejected responses may have relatively limited impact. Our theoretical analysis characterizes the optimal response distribution under DPO and reveals how contrastiveness between responses helps primarily by improving the chosen samples. We further study an online DPO setting and show it effectively reduces to supervised fine-tuning on the chosen responses. Extensive experiments across diverse tasks confirm our findings: improving the quality of chosen responses consistently boosts performance regardless of the quality of the rejected responses. We also investigate the benefit of mixing in on-policy data. Our results explain the mechanism behind several widely adopted strategies and offer practical insights for constructing high-impact preference datasets for LLM alignment.
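
To make the role of each response in this objective concrete, here is a minimal PyTorch sketch, not the authors' code, that computes the per-pair DPO loss from precomputed sequence log-probabilities; the sft_chosen_loss helper is included only to show the SFT-on-chosen objective that, per the abstract, online DPO effectively reduces to.

```python
# Minimal, illustrative sketch (not the authors' code) of the per-example
# DPO loss, written over precomputed sequence log-probabilities so that the
# roles of the chosen and rejected responses are explicit.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss for one (chosen, rejected) pair from sequence log-probs."""
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # The loss rewards a large margin between the chosen and rejected
    # log-ratios; the paper's finding is that in practice most of the gain
    # comes through the chosen term, i.e. through chosen-response quality.
    margin = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(margin)

def sft_chosen_loss(policy_chosen_logp: torch.Tensor) -> torch.Tensor:
    """SFT on chosen responses only: maximize log pi_theta(y_w | x).
    Per the abstract, online DPO effectively reduces to this objective."""
    return -policy_chosen_logp

# Example with scalar sequence log-probabilities for a single pair.
loss = dpo_loss(torch.tensor(-42.0), torch.tensor(-55.0),
                torch.tensor(-45.0), torch.tensor(-50.0))
```
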
Problem

Research questions and friction points this paper is trying to address.

Identifying key characteristics of preference data for DPO effectiveness
Analyzing how chosen and rejected response quality impacts DPO performance
Investigating optimal data distribution strategies for LLM alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Focus on chosen response quality
Contrastiveness improves chosen samples
Online DPO reduces to supervised fine-tuning on chosen responses (see the sketch below)
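
To connect these points, the following schematic shows a generic online-DPO data-collection step; it is an illustrative sketch under stated assumptions rather than the paper's exact protocol, and sample_fn and score_fn are hypothetical placeholders for policy sampling and response scoring. Because both responses are drawn from the same policy and the better one is always labeled chosen, the update mainly reinforces the highest-quality sampled response, which matches the bullet above and the rejection-sampling and quality-filtering strategies mentioned in the summary.

```python
# Schematic of a generic online-DPO data-collection step (an illustrative
# assumption, not the paper's exact protocol). `sample_fn` and `score_fn`
# are hypothetical placeholders for policy sampling and response scoring.
def online_dpo_pair(sample_fn, score_fn, prompt, n=4):
    """Sample n candidates from the current policy; label the best one as
    chosen and the worst one as rejected according to the quality score."""
    candidates = [sample_fn(prompt) for _ in range(n)]
    ranked = sorted(candidates, key=score_fn)
    rejected, chosen = ranked[0], ranked[-1]
    # Both responses come from the same policy, so the DPO update mostly
    # pulls probability mass toward the best sampled response; this is the
    # intuition behind the reduction to SFT on chosen responses.
    return chosen, rejected  # then scored under policy/reference and passed to a DPO loss
```
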
Authors

Yu Pan
University of Sydney
Zhongze Cai
Imperial College London
Guanting Chen
University of North Carolina at Chapel Hill
Huaiyang Zhong
Assistant Professor, Virginia Tech
Chonghuan Wang
University of Texas at Dallas