Linear Preference Optimization: Decoupled Gradient Control via Absolute Regularization

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
While Direct Preference Optimization (DPO) offers training stability, it suffers from overfitting and model collapse. To address these issues, we propose Linear Preference Optimization (LPO), a novel preference alignment method that replaces DPO's log-sigmoid loss with an absolute difference loss, thereby decoupling gradient updates for preferred and dispreferred responses within each preference pair. LPO further introduces an offset constraint and a quality-preserving regularization term, enabling a linearly controllable reduction of the rejection probability. This design mitigates gradient conflict and optimization imbalance, substantially improving training stability and robustness. Empirically, LPO consistently outperforms DPO across diverse tasks, including general text generation, mathematical reasoning, and speech synthesis, demonstrating strong generalization capability. All code, models, and datasets are publicly released.
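
For context, the objective that LPO modifies is the standard DPO loss below. This block only restates the well-known DPO formulation for contrast; LPO's exact loss is defined in the paper itself.

```latex
% Standard DPO objective: a log-sigmoid of the scaled reward margin, which ties
% the gradients of the chosen response y_w and the rejected response y_l together.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

Because both log-ratios enter through a single sigmoid of their difference, updates to the chosen and rejected responses are scaled by the same factor; replacing this log-sigmoid with an absolute difference loss is what allows LPO to decouple the two gradients.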

📝 Abstract
DPO (Direct Preference Optimization) has become a widely used offline preference optimization algorithm due to its simplicity and training stability. However, DPO is prone to overfitting and collapse. To address these challenges, we propose Linear Preference Optimization (LPO), a novel alignment framework featuring three key innovations. First, we introduce gradient decoupling by replacing the log-sigmoid function with an absolute difference loss, thereby isolating the optimization dynamics. Second, we improve stability through an offset constraint combined with a positive regularization term to preserve the chosen response quality. Third, we implement controllable rejection suppression using gradient separation with straightforward estimation and a tunable coefficient that linearly regulates the descent of the rejection probability. Through extensive experiments, we demonstrate that LPO consistently improves performance on various tasks, including general text tasks, math tasks, and text-to-speech (TTS) tasks. These results establish LPO as a robust and tunable paradigm for preference alignment, and we release the source code, models, and training data publicly.
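
Below is a minimal sketch of how the three components described in the abstract might fit together, written against a policy/reference log-ratio interface. It is an interpretation of the abstract, not the paper's released implementation: the function name `lpo_style_loss`, the exact form of each term, and the hyperparameters `offset`, `lambda_chosen`, and `gamma_reject` (including their defaults) are all assumptions.

```python
import torch


def lpo_style_loss(
    chosen_logratio: torch.Tensor,    # log pi_theta(y_w|x) - log pi_ref(y_w|x), shape (B,)
    rejected_logratio: torch.Tensor,  # log pi_theta(y_l|x) - log pi_ref(y_l|x), shape (B,)
    chosen_logprob: torch.Tensor,     # log pi_theta(y_w|x), shape (B,), used for quality preservation
    offset: float = 1.0,              # assumed target offset for the chosen log-ratio
    lambda_chosen: float = 0.1,       # assumed weight of the positive (chosen-quality) regularizer
    gamma_reject: float = 0.1,        # assumed coefficient linearly controlling rejection suppression
) -> torch.Tensor:
    """Sketch of an LPO-style objective built from three decoupled terms."""
    # (1) Gradient decoupling via absolute regularization: the chosen log-ratio is
    #     pulled toward a fixed offset on its own, so its gradient does not depend
    #     on the rejected response (unlike DPO's shared log-sigmoid factor).
    chosen_term = (chosen_logratio - offset).abs()

    # (2) Stability / quality preservation: a positive regularizer that keeps the
    #     chosen response likely under the policy.
    chosen_reg = -lambda_chosen * chosen_logprob

    # (3) Controllable rejection suppression: a linear penalty on the rejected
    #     log-ratio alone; gamma_reject sets how fast its probability is pushed down.
    reject_term = gamma_reject * rejected_logratio

    return (chosen_term + chosen_reg + reject_term).mean()
```

In a real training loop the log-ratios would come from token-level log-probabilities of the policy and a frozen reference model summed over each response; the authors' released code should be consulted for the exact loss, offset constraint, and rejection estimator.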
Problem

Research questions and friction points this paper is trying to address.

DPO is prone to overfitting and model collapse
DPO's log-sigmoid loss couples gradient updates for chosen and rejected responses, causing gradient conflict and optimization imbalance
DPO offers no direct control over how quickly the rejection probability decreases during training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient decoupling via absolute difference loss
Stability improvement via an offset constraint plus a positive regularization term that preserves chosen-response quality
Controllable rejection suppression through gradient separation and a tunable coefficient
Authors

Rui Wang (International Digital Economy Academy)
Qianguo Sun (International Digital Economy Academy)
Chao Song (International Digital Economy Academy)
Yu Li (International Digital Economy Academy)
Junlong Wu (Emdoor Collaborative Laboratory)
Tianrong Chen (Apple Machine Learning Research)
Zhiyun Zeng (Emdoor Collaborative Laboratory)