🤖 AI Summary
DPO suffers from significant likelihood displacement in diffusion models, particularly for video generation, where preferred samples exhibit anomalously reduced generation probabilities, degrading output quality. This work presents the first systematic analysis of this phenomenon within diffusion modeling, identifying its root causes and downstream impacts. We propose Policy-Guided DPO (PG-DPO), a novel framework featuring two key innovations: (1) Adaptive Rejection Scaling (ARS), which mitigates the gradient conflicts that arise when the reward margin between chosen and rejected samples is small; and (2) Implicit Preference Regularization (IPR), which works alongside ARS to constrain policy updates and suppress the suboptimal maximization caused by large reward margins. Evaluated across multiple video generation benchmarks, PG-DPO consistently improves FID, FVD, and human preference scores, and qualitative results demonstrate simultaneous gains in temporal consistency and visual fidelity. Our approach establishes a stable, scalable paradigm for preference-aligned training of diffusion models.
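For reference, likelihood displacement is visible directly in the structure of the standard DPO objective (shown here for context; this is the textbook DPO loss, not an excerpt from the paper):

$$\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$

Because the loss depends only on the margin between the chosen sample $y_w$ and the rejected sample $y_l$, it can keep decreasing even while $\pi_\theta(y_w \mid x)$ falls, so long as $\pi_\theta(y_l \mid x)$ falls faster; this is the displacement that PG-DPO targets.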
📝 Abstract
Direct Preference Optimization (DPO) has shown promising results in aligning generative outputs with human preferences by distinguishing between chosen and rejected samples. However, a critical limitation of DPO is likelihood displacement, where the probabilities of chosen samples paradoxically decrease during training, undermining generation quality. Although this issue has been investigated in autoregressive models, its impact within diffusion-based models remains largely unexplored, leading to suboptimal performance in tasks such as video generation. To address this gap, we conduct a formal analysis of the DPO loss through the lens of policy updates within the diffusion framework, characterizing how an update on a specific training sample influences the model's predictions on other samples. Using this analysis, we identify two main failure modes: (1) Optimization Conflict, which arises from small reward margins between chosen and rejected samples, and (2) Suboptimal Maximization, caused by large reward margins. Informed by these insights, we introduce a novel solution named Policy-Guided DPO (PG-DPO), which combines Adaptive Rejection Scaling (ARS) and Implicit Preference Regularization (IPR) to effectively mitigate likelihood displacement. Experiments show that PG-DPO outperforms existing methods in both quantitative metrics and qualitative evaluations, offering a robust solution for improving preference alignment in video generation tasks.
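A minimal sketch of how ARS and IPR might plug into a diffusion-DPO training step is given below. The abstract does not specify their exact forms, so the sigmoid gating rule, the `ipr_coef` hyperparameter, and the one-sided regularizer are illustrative assumptions, not the paper's formulation; the per-sample denoising errors follow the usual Diffusion-DPO parameterization of implicit rewards.

```python
# Hypothetical sketch of a PG-DPO-style loss for a diffusion model.
# The exact forms of ARS and IPR are NOT given in the abstract; the
# gating rule and regularizer below are illustrative assumptions only.
import torch
import torch.nn.functional as F

def pg_dpo_loss(err_w_theta, err_w_ref, err_l_theta, err_l_ref,
                beta=1.0, ipr_coef=0.1):
    """Diffusion-DPO-style loss with assumed ARS and IPR terms.

    err_*: per-sample denoising errors ||eps - eps_model(x_t, t)||^2 for
    the chosen (w) and rejected (l) samples under the policy (theta) and
    the frozen reference model (ref).
    """
    # Implicit per-sample "rewards": how much the policy improves over the
    # reference at denoising each sample (lower error => higher reward).
    reward_w = -(err_w_theta - err_w_ref)
    reward_l = -(err_l_theta - err_l_ref)
    margin = reward_w - reward_l

    # Adaptive Rejection Scaling (assumed form): down-weight the rejected
    # term when the margin is small, so pushing down the rejected sample
    # does not drag the chosen sample's likelihood with it.
    ars = torch.sigmoid(margin.detach())  # in (0, 1); no gradient through the gate
    dpo_logit = beta * (reward_w - ars * reward_l)
    dpo_term = -F.logsigmoid(dpo_logit).mean()

    # Implicit Preference Regularization (assumed form): penalize the policy
    # only when its denoising error on the chosen sample exceeds the
    # reference's, which discourages runaway maximization at large margins.
    ipr_term = F.relu(err_w_theta - err_w_ref).mean()

    return dpo_term + ipr_coef * ipr_term
```

In this sketch, ARS damps the rejected-sample term when the implicit reward margin is small (limiting the gradient conflict of the first failure mode), while IPR keeps the policy's behavior on chosen samples anchored to the reference when the margin is already large (suppressing the second).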