Beyond Reward Margin: Rethinking and Resolving Likelihood Displacement in Diffusion Models via Video Generation

📅 2025-11-24
🤖 AI Summary
DPO suffers from significant likelihood displacement in diffusion models—particularly for video generation—where preferred samples exhibit anomalously reduced generation probabilities, degrading output quality. This work presents the first systematic analysis of this phenomenon within diffusion modeling, identifying its root causes and downstream impacts. We propose Policy-Guided DPO (PG-DPO), a novel framework featuring two key innovations: (1) Adaptive Rejection Scaling (ARS), which mitigates gradient conflicts arising from disparate reward magnitudes; and (2) Implicit Preference Regularization (IPR), which jointly constrains policy updates with ARS to suppress suboptimal maximization. Evaluated across multiple video generation benchmarks, PG-DPO consistently improves FID, FVD, and human preference scores. Qualitative results demonstrate simultaneous gains in temporal consistency and visual fidelity. Our approach establishes a stable, scalable paradigm for preference-aligned training of diffusion models.
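The displacement the summary describes follows directly from the shape of the standard DPO objective: the loss depends only on the *relative* reward margin between chosen and rejected samples, so the absolute likelihood of the chosen sample is unconstrained. A minimal sketch (plain DPO, not the paper's method; variable names are illustrative):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * margin).

    The margin compares policy log-probs against a frozen reference model.
    Because only the margin matters, the chosen sample's likelihood can
    collapse without penalty -- the root of likelihood displacement.
    """
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Two policies with the same margin but very different chosen likelihoods
# incur exactly the same loss:
healthy = dpo_loss(logp_chosen=-10.0, logp_rejected=-14.0,
                   ref_chosen=-12.0, ref_rejected=-12.0)
displaced = dpo_loss(logp_chosen=-50.0, logp_rejected=-54.0,
                     ref_chosen=-12.0, ref_rejected=-12.0)
```

Here `displaced` drives the chosen log-probability far below the reference yet receives the same loss as `healthy`, which is why the summary argues that DPO alone cannot prevent preferred samples from losing probability mass.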

📝 Abstract
Direct Preference Optimization (DPO) has shown promising results in aligning generative outputs with human preferences by distinguishing between chosen and rejected samples. However, a critical limitation of DPO is likelihood displacement, in which the probabilities of chosen samples paradoxically decrease during training, undermining generation quality. Although this issue has been investigated in autoregressive models, its impact on diffusion-based models remains largely unexplored, leading to suboptimal performance in video generation tasks. To address this, we conduct a formal analysis of the DPO loss through the lens of policy updates within the diffusion framework, characterizing how updates on specific training samples influence the model's predictions on other samples. Using this tool, we identify two main failure modes: (1) Optimization Conflict, which arises from small reward margins between chosen and rejected samples, and (2) Suboptimal Maximization, caused by large reward margins. Informed by these insights, we introduce a novel solution named Policy-Guided DPO (PG-DPO), which combines Adaptive Rejection Scaling (ARS) and Implicit Preference Regularization (IPR) to effectively mitigate likelihood displacement. Experiments show that PG-DPO outperforms existing methods in both quantitative metrics and qualitative evaluations, offering a robust solution for improving preference alignment in video generation tasks.
Problem

Research questions and friction points this paper is trying to address.

Addresses likelihood displacement issue in diffusion models during training
Resolves optimization conflicts from small reward margins in DPO
Improves preference alignment for video generation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Policy-Guided DPO for diffusion models
Combines Adaptive Rejection Scaling with Implicit Preference Regularization
Mitigates likelihood displacement in video generation tasks
Ruojun Xu
Zhejiang University
Yu Kai
Tencent
Xuhua Ren
Tencent
Jiaxiang Cheng
Tencent
Bing Ma
Tianxiang Zheng
Tencent
Qinglin Lu
Tencent