Autoregressive Direct Preference Optimization

📅 2026-02-10
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a key limitation of existing Direct Preference Optimization (DPO) methods: they neglect the autoregressive nature of language models when applying the Bradley–Terry preference model, which constrains alignment performance. To remedy this, the authors propose Autoregressive DPO (ADPO), which introduces the autoregressive assumption into preference modeling before the objective is derived, reformulating the DPO loss so that the summation operation moves outside the log-sigmoid function. The work is the first to explicitly distinguish token length from feedback length, integrating the autoregressive structure directly into the theoretical foundation of preference optimization. The resulting ADPO algorithm is theoretically grounded and computationally simple; its length-aware formulation improves consistency between the optimization objective and the generative process, yielding better alignment both theoretically and empirically.
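The summary's central reformulation, moving the summation outside the log-sigmoid, can be illustrated with a minimal numerical sketch. This is a hypothetical illustration rather than the paper's implementation: the function names, the token-wise pairing of chosen and rejected log-ratios, and the value of β are all assumptions for the sake of the example.

```python
import math

def logsigmoid(z):
    # Numerically stable log(sigmoid(z)).
    return -math.log1p(math.exp(-z)) if z >= 0 else z - math.log1p(math.exp(z))

def dpo_loss(chosen_ratios, rejected_ratios, beta=0.1):
    # Standard DPO: sum the per-token log-ratios of each response first,
    # then apply the log-sigmoid once to the response-level margin.
    margin = beta * (sum(chosen_ratios) - sum(rejected_ratios))
    return -logsigmoid(margin)

def adpo_loss(chosen_ratios, rejected_ratios, beta=0.1):
    # ADPO-style sketch (assumed pairing): apply the log-sigmoid per
    # token-level margin, then sum -- the summation sits outside.
    return -sum(
        logsigmoid(beta * (c - r))
        for c, r in zip(chosen_ratios, rejected_ratios)
    )
```

For single-token responses the two losses coincide, since the sum contains one term; they diverge as sequences grow, which is where the paper's length analysis becomes relevant.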

๐Ÿ“ Abstract
Direct preference optimization (DPO) has emerged as a promising approach for aligning large language models (LLMs) with human preferences. However, the widespread reliance on the response-level Bradley-Terry (BT) model may limit its full potential, as the reference and learnable models are assumed to be autoregressive only after deriving the objective function. Motivated by this limitation, we revisit the theoretical foundations of DPO and propose a novel formulation that explicitly introduces the autoregressive assumption prior to applying the BT model. By reformulating and extending DPO, we derive a novel variant, termed Autoregressive DPO (ADPO), that explicitly integrates autoregressive modeling into the preference optimization framework. Without violating the theoretical foundations, the derived loss takes an elegant form: it shifts the summation operation in the DPO objective outside the log-sigmoid function. Furthermore, through theoretical analysis of ADPO, we show that there exist two length measures to be considered when designing DPO-based algorithms: the token length $\mu$ and the feedback length $\mu'$. To the best of our knowledge, we are the first to explicitly distinguish these two measures and analyze their implications for preference optimization in LLMs.
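Based on the abstract's description, the reformulation can be sketched as follows. The standard DPO objective applies the log-sigmoid to a response-level margin, where each response log-probability decomposes autoregressively:

$$
\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}\left[\log\sigma\!\left(\beta\sum_{t}\log\frac{\pi_\theta(y_{w,t}\mid x, y_{w,<t})}{\pi_{\text{ref}}(y_{w,t}\mid x, y_{w,<t})} \;-\; \beta\sum_{t}\log\frac{\pi_\theta(y_{l,t}\mid x, y_{l,<t})}{\pi_{\text{ref}}(y_{l,t}\mid x, y_{l,<t})}\right)\right]
$$

Per the abstract, ADPO shifts the summation outside the log-sigmoid; a plausible token-level form (the exact pairing and weighting of terms is an assumption here, not taken from the paper) would be:

$$
\mathcal{L}_{\text{ADPO}} = -\,\mathbb{E}\left[\sum_{t}\log\sigma\!\left(\beta\,\Delta_t\right)\right],
\qquad
\Delta_t = \log\frac{\pi_\theta(y_{w,t}\mid x, y_{w,<t})}{\pi_{\text{ref}}(y_{w,t}\mid x, y_{w,<t})} - \log\frac{\pi_\theta(y_{l,t}\mid x, y_{l,<t})}{\pi_{\text{ref}}(y_{l,t}\mid x, y_{l,<t})}
$$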
Problem

Research questions and friction points this paper is trying to address.

Direct Preference Optimization
Autoregressive Modeling
Bradley-Terry Model
Preference Alignment
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoregressive DPO
Direct Preference Optimization
Bradley-Terry model
length measures
preference alignment