DeDPO: Debiased Direct Preference Optimization for Diffusion Models

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high cost and poor scalability of diffusion model preference alignment, which typically relies on large-scale, high-quality human annotations. To mitigate this limitation, the authors propose Debiased Direct Preference Optimization (DeDPO), a framework that combines a small amount of human preference data with abundant, low-cost AI-generated feedback. DeDPO is the first to incorporate debiasing techniques from causal inference into the DPO objective, correcting the systematic bias and noise inherent in synthetic labels. By combining self-training with preferences synthesized by vision-language models, DeDPO performs robustly across a range of synthetic annotation settings, matching or even surpassing models trained entirely on human-annotated data, which the authors treat as the performance upper bound. The approach substantially reduces reliance on expensive human annotation while improving robustness and generalization under imperfect supervision.
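The summary does not spell out the debiased estimator itself, so the following is a minimal sketch of one plausible reading, not the paper's actual method: a Diffusion-DPO-style pairwise logit combined with a label-flip noise correction in the spirit of conservative DPO. The `flip_rate` parameter (the assumed probability that a synthetic annotator mislabels a pair) and all function names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def diffusion_dpo_logits(err_theta_w, err_ref_w, err_theta_l, err_ref_l, beta):
    # Diffusion-DPO approximates the implicit log-likelihood ratio with
    # differences of epsilon-prediction MSEs under the trained model
    # (err_theta_*) and a frozen reference model (err_ref_*).
    margin_w = err_theta_w - err_ref_w  # theta vs. ref error on the preferred sample
    margin_l = err_theta_l - err_ref_l  # theta vs. ref error on the dispreferred sample
    return -beta * (margin_w - margin_l)

def debiased_dpo_loss(logits, is_synthetic, flip_rate=0.2):
    # Standard DPO term: -log sigmoid(logit), assuming the label is correct.
    loss_clean = -F.logsigmoid(logits)
    # Term for the case where the synthetic annotator flipped the pair.
    loss_flipped = -F.logsigmoid(-logits)
    # Expected loss under an assumed flip probability: human-labeled pairs
    # are trusted as-is; synthetic pairs get the noise-corrected mixture.
    corrected = (1.0 - flip_rate) * loss_clean + flip_rate * loss_flipped
    return torch.where(is_synthetic, corrected, loss_clean).mean()
```

Setting `flip_rate=0` recovers plain DPO on every pair; in DeDPO the correction is presumably estimated from data via the causal-inference machinery rather than fixed by hand.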

📝 Abstract
Direct Preference Optimization (DPO) has emerged as a predominant alignment method for diffusion models, facilitating off-policy training without explicit reward modeling. However, its reliance on large-scale, high-quality human preference labels presents a severe cost and scalability bottleneck. To overcome this, we propose a semi-supervised framework that augments limited human data with a large corpus of unlabeled pairs annotated via cost-effective synthetic AI feedback. Our paper introduces Debiased DPO (DeDPO), which uniquely integrates a debiased estimation technique from causal inference into the DPO objective. By explicitly identifying and correcting the systematic bias and noise inherent in synthetic annotators, DeDPO ensures robust learning from imperfect feedback sources, including self-training and Vision-Language Models (VLMs). Experiments demonstrate that DeDPO is robust to variations in synthetic labeling methods, achieving performance that matches, and occasionally exceeds, the upper bound set by models trained on fully human-labeled data. This establishes DeDPO as a scalable solution for human-AI alignment using inexpensive synthetic supervision.
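For context, the DPO objective the abstract builds on (Rafailov et al., 2023) is the standard pairwise logistic loss; diffusion variants such as Diffusion-DPO (Wallace et al., 2024) replace the intractable image likelihoods with denoising-error differences:

$$
\mathcal{L}_{\mathrm{DPO}}(\theta) \;=\; -\,\mathbb{E}_{(c,\,x^{w},\,x^{l})}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(x^{w}\mid c)}{\pi_{\mathrm{ref}}(x^{w}\mid c)} \;-\; \beta \log \frac{\pi_\theta(x^{l}\mid c)}{\pi_{\mathrm{ref}}(x^{l}\mid c)}\right)\right]
$$

where $c$ is the prompt, $x^{w}$ and $x^{l}$ are the preferred and dispreferred samples, $\pi_{\mathrm{ref}}$ is a frozen reference model, and $\beta$ controls the strength of the implicit KL constraint. Per the abstract, DeDPO's contribution is a debiased estimate of this expectation when the $(x^{w}, x^{l})$ ordering comes from a biased synthetic annotator; the exact correction is not given in this summary.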
Problem

Research questions and friction points this paper is trying to address.

Direct Preference Optimization
diffusion models
human preference labels
scalability
synthetic feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Debiased DPO
synthetic feedback
causal inference
diffusion models
preference optimization