A Survey of Direct Preference Optimization

📅 2025-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Direct Preference Optimization (DPO) lacks a systematic taxonomy, standardized evaluation protocols, and unified theoretical foundations. Method: We propose the first four-dimensional DPO taxonomy—spanning data strategies, learning frameworks, constraint mechanisms, and model attributes—and establish a standardized empirical evaluation framework, conducting cross-method comparative analyses across multiple benchmarks. We further release an open-source, continuously updated DPO repository encompassing code, datasets, and reproducible scripts. Contribution/Results: This work achieves the first unified taxonomy, standardized evaluation, and fully reproducible implementation in DPO research. It provides both a theoretical framework and practical engineering guidelines for DPO, significantly enhancing the robustness and generalization of LLM alignment methods. By enabling rigorous, comparable, and reproducible experimentation, our contributions advance trustworthy and efficient human preference modeling.

📝 Abstract
Large Language Models (LLMs) have demonstrated unprecedented generative capabilities, yet their alignment with human values remains critical for ensuring helpful and harmless deployments. While Reinforcement Learning from Human Feedback (RLHF) has emerged as a powerful paradigm for aligning LLMs with human preferences, its reliance on complex reward modeling introduces inherent trade-offs in computational efficiency and training stability. In this context, Direct Preference Optimization (DPO) has recently gained prominence as a streamlined alternative that directly optimizes LLMs using human preferences, thereby circumventing the need for explicit reward modeling. Owing to its theoretical elegance and computational efficiency, DPO has rapidly attracted substantial research efforts exploring its various implementations and applications. However, this field currently lacks systematic organization and comparative analysis. In this survey, we conduct a comprehensive overview of DPO and introduce a novel taxonomy, categorizing previous works into four key dimensions: data strategy, learning framework, constraint mechanism, and model property. We further present a rigorous empirical analysis of DPO variants across standardized benchmarks. Additionally, we discuss real-world applications, open challenges, and future directions for DPO. This work delivers both a conceptual framework for understanding DPO and practical guidance for practitioners, aiming to advance robust and generalizable alignment paradigms. All collected resources are available and will be continuously updated at https://github.com/liushunyu/awesome-direct-preference-optimization.
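As the abstract notes, DPO optimizes the policy directly on preference pairs instead of fitting an explicit reward model. A minimal sketch of the standard DPO objective for a single preference pair is shown below; the function and argument names are illustrative, and the inputs are assumed to be summed log-probabilities of the chosen (y_w) and rejected (y_l) responses under the policy and a frozen reference model.

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss: -log sigmoid of the implicit reward margin.

    A sketch of the standard DPO objective; names are illustrative.
    """
    # Implicit reward margin: beta times the difference of the
    # policy-vs-reference log-ratios for chosen vs. rejected responses.
    margin = beta * ((policy_logp_chosen - ref_logp_chosen)
                     - (policy_logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid: minimized when the policy prefers y_w over y_l
    # more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the policy equals the reference, the margin is 0 and the
# loss is -log(0.5) = log 2, the starting point of training.
print(dpo_loss(-10.0, -12.0, -10.0, -12.0))  # → log 2 ≈ 0.693
```

The `beta` hyperparameter plays the role of the KL-constraint strength from the RLHF objective: larger values keep the policy closer to the reference model.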
Problem

Research questions and friction points this paper is trying to address.

Aligning Large Language Models with human values
Streamlining alignment using Direct Preference Optimization
Systematic organization and analysis of DPO methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct Preference Optimization for LLMs
Eliminates explicit reward modeling
Categorizes DPO into four dimensions
Shunyu Liu
Nanyang Technological University
Multi-Agent Learning, Reinforcement Learning, Large Language Models, Power System Control
Wenkai Fang
College of Computer Science and Technology, Zhejiang University, China
Zetian Hu
School of Aerospace Engineering, Tsinghua University, China
Junjie Zhang
Nanyang Technological University, Singapore
Yang Zhou
College of Computer Science and Technology, Zhejiang University, China
Kongcheng Zhang
College of Computer Science and Technology, Zhejiang University, China
Rongcheng Tu
Nanyang Technological University, Singapore
Ting-En Lin
Alibaba Group, Tongyi
Natural Language Processing, Spoken Dialogue System, Large Language Model, Deep Learning
Fei Huang
Tongyi Lab, Alibaba Group, China
Mingli Song
College of Computer Science and Technology, Zhejiang University, China
Yongbin Li
Tongyi Lab, Alibaba Group, China
Dacheng Tao
Nanyang Technological University
artificial intelligence, machine learning, computer vision, image processing, data mining