Distributionally Robust Direct Preference Optimization

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address performance degradation in LLM alignment caused by distributional shifts in user preferences across geographic, demographic, linguistic, and cultural dimensions, this paper proposes the first distributionally robust direct preference optimization framework. Methodologically, it introduces two novel algorithms, Wasserstein DPO (WDPO) and Kullback-Leibler DPO (KLDPO), that model preference-distribution uncertainty via the Wasserstein distance and KL divergence, respectively, within a minimax optimization paradigm. Theoretical analysis characterizes their sample complexity, and scalable gradient-descent-style solvers are developed by approximating the challenging minimax loss functions. Contributions include: (i) the first formal distributionally robust optimization (DRO) formulation for preference optimization; (ii) principled algorithmic designs with provable sample-complexity guarantees; and (iii) empirical validation showing significant improvements over standard DPO under preference distribution shift, enhancing both alignment stability and cross-distribution generalization.

📝 Abstract
A major challenge in aligning large language models (LLMs) with human preferences is the issue of distribution shift. LLM alignment algorithms rely on static preference datasets, assuming that they accurately represent real-world user preferences. However, user preferences vary significantly across geographical regions, demographics, linguistic patterns, and evolving cultural trends. This preference distribution shift leads to catastrophic alignment failures in many real-world applications. We address this problem using the principled framework of distributionally robust optimization, and develop two novel distributionally robust direct preference optimization (DPO) algorithms, namely, Wasserstein DPO (WDPO) and Kullback-Leibler DPO (KLDPO). We characterize the sample complexity of learning the optimal policy parameters for WDPO and KLDPO. Moreover, we propose scalable gradient descent-style learning algorithms by developing suitable approximations for the challenging minimax loss functions of WDPO and KLDPO. Our empirical experiments demonstrate the superior performance of WDPO and KLDPO in substantially improving the alignment when there is a preference distribution shift.
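Both proposed algorithms build on the standard DPO objective, which scores a preference pair by the policy's log-probability margin over a fixed reference model. As context for the robust variants, here is a minimal numpy sketch of the per-example DPO loss under the standard definitions (illustrative only; the variable names are ours, not the paper's):

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * margin), where the margin
    compares policy vs. reference log-probabilities of the chosen (w) and
    rejected (l) responses."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(x)) written in a numerically stable form
    return np.log1p(np.exp(-margin))

# When the policy already prefers the chosen response relative to the
# reference, the margin is positive and the loss drops below log(2).
loss = dpo_loss(logp_w=-2.0, logp_l=-6.0, ref_logp_w=-4.0, ref_logp_l=-4.0)
```

WDPO and KLDPO replace the expectation of this loss over the empirical preference distribution with a worst-case expectation over an uncertainty ball around it.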
Problem

Research questions and friction points this paper is trying to address.

Addresses preference distribution shift in LLM alignment, where static preference datasets fail to represent evolving real-world user preferences.
Develops distributionally robust optimization algorithms for preference alignment.
Improves alignment stability across geographically, demographically, linguistically, and culturally diverse user populations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributionally robust optimization (DRO) framework for direct preference optimization
Wasserstein DPO (WDPO) and Kullback-Leibler DPO (KLDPO) algorithms with sample-complexity guarantees
Scalable gradient-descent-style learning algorithms via approximations of the minimax losses
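The card does not reproduce KLDPO's exact minimax approximation, but a standard way to make a KL-ball worst-case loss gradient-friendly is the KL-DRO dual bound: the worst-case expected loss over distributions within KL radius rho of the data distribution is bounded by an exponentially tilted (log-mean-exp) surrogate. A hedged numpy sketch of that generic identity, which up-weights the hardest preference pairs (the paper's actual solver may differ):

```python
import numpy as np

def kl_robust_loss(losses, tau=1.0, rho=0.1):
    """Generic KL-DRO dual surrogate (illustrative, not the paper's exact
    KLDPO objective): for any tau > 0,
        sup_{Q : KL(Q||P) <= rho} E_Q[loss] <= tau * log E_P[exp(loss/tau)] + tau*rho.
    Smaller tau puts more weight on the worst examples."""
    scaled = losses / tau
    # log-mean-exp computed stably by subtracting the max
    m = scaled.max()
    lme = m + np.log(np.mean(np.exp(scaled - m)))
    return tau * lme + tau * rho

losses = np.array([0.2, 0.5, 2.0])  # hypothetical per-example DPO losses
robust = kl_robust_loss(losses, tau=0.5, rho=0.1)
```

By Jensen's inequality the surrogate never falls below the average loss, and as tau shrinks it approaches the maximum per-example loss, interpolating between average-case and worst-case training.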