Preference Optimization by Estimating the Ratio of the Data Distribution

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fundamental trade-off between generation fidelity and diversity in preference optimization. We propose Bregman Preference Optimization (BPO), a reward-model-free, partition-function-free, end-to-end alignment framework that directly estimates the distribution ratio between the target and reference policies. First, we establish a general theoretical framework for ratio matching based on Bregman divergences, unifying and rigorously generalizing DPO while guaranteeing target-policy optimality. Second, we introduce scaled Basu's power divergence (SBA), a gradient-scaling technique that overcomes the fidelity-diversity trade-off inherent in f-PO-style methods. Evaluated on Llama-3-Instruct-8B, BPO achieves a 55.9% length-controlled win rate on AlpacaEval2, significantly outperforming DPO, f-DPO, and f-PO. Crucially, this gain comes with increased output entropy, preserving generative diversity. BPO thus sets a new state of the art for the Llama-3-8B backbone.
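For intuition, the sketch below shows the classical Bregman-divergence density-ratio matching objective that this kind of framework builds on; it is not the paper's sequence-level BPO loss, and the generator choice and variable names are illustrative assumptions.

```python
import torch

def bregman_ratio_matching_loss(r_on_ref, r_on_target, f, f_prime):
    """Empirical Bregman ratio-matching objective, up to a model-free constant.

    r_on_ref:    model ratio r_theta(x) evaluated on samples x ~ reference distribution q
    r_on_target: model ratio r_theta(x) evaluated on samples x ~ target distribution p
    f, f_prime:  strictly convex generator of the Bregman divergence and its derivative
    """
    # Minimizing E_q[f'(r) * r - f(r)] - E_p[f'(r)] drives r_theta toward p/q;
    # the dropped constant E_q[f(p/q)] does not depend on the model.
    return (f_prime(r_on_ref) * r_on_ref - f(r_on_ref)).mean() - f_prime(r_on_target).mean()

# Example generator: f(t) = t*log(t) - t yields a KL-type ratio-matching loss.
f = lambda t: t * torch.log(t) - t
f_prime = lambda t: torch.log(t)
```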

📝 Abstract
Direct preference optimization (DPO) is widely used as a simple and stable method for aligning large language models (LLMs) with human preferences. This paper investigates a generalized DPO loss that enables a policy model to match the target policy from a likelihood ratio estimation perspective. The ratio of the target policy provides a unique identification of the policy distribution without relying on reward models or partition functions. This allows the generalized loss to retain both simplicity and theoretical guarantees, which prior work such as $f$-PO fails to achieve simultaneously. We propose Bregman preference optimization (BPO), a generalized framework for ratio matching that provides a family of objective functions achieving target policy optimality. BPO subsumes DPO as a special case and offers tractable forms for all instances, allowing implementation with a few lines of code. We further develop scaled Basu's power divergence (SBA), a gradient scaling method that can be used for BPO instances. The BPO framework complements other DPO variants and is applicable to target policies defined by these variants. In experiments, unlike other probabilistic loss extensions such as $f$-DPO or $f$-PO, which exhibit a trade-off between generation fidelity and diversity, instances of BPO improve both win rate and entropy compared with DPO. When applied to Llama-3-Instruct-8B, BPO achieves state-of-the-art performance among Llama-3-8B backbones, with a 55.9% length-controlled win rate on AlpacaEval2.
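Since the abstract notes that every BPO instance admits a tractable form implementable in a few lines of code, with DPO recovered as a special case, the following sketch writes out that DPO special case over summed per-response log-probabilities. The tensor names and the beta default are illustrative assumptions, not the authors' implementation.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit log-ratios log pi_theta(y|x) - log pi_ref(y|x) per response.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # DPO: negative log-sigmoid of the beta-scaled margin between log-ratios;
    # BPO generalizes this objective to a family derived from Bregman divergences.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```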
Problem

Research questions and friction points this paper is trying to address.

Generalizes DPO loss for policy matching via likelihood ratios
Proposes BPO framework for ratio matching with optimality guarantees
Improves win rate and diversity without fidelity trade-offs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalized DPO loss via likelihood ratio estimation
Bregman preference optimization for ratio matching
Scaled Basu's power divergence for gradient scaling
Authors

Yeongmin Kim
Korea Advanced Institute of Science and Technology (KAIST)

Heesun Bae
Korea Advanced Institute of Science and Technology (KAIST)

Byeonghu Na
KAIST
Generative Model · Diffusion Model

Il-Chul Moon
Professor, Department of Industrial and Systems Engineering, KAIST
Modeling and Simulation · Artificial Intelligence