🤖 AI Summary
Diffusion models for text-to-image generation often align imprecisely with user intent and deliver inconsistent aesthetic quality. Existing preference-based optimization methods (e.g., Diffusion-DPO) rely on costly, noisy human-annotated preference data. This paper proposes the first annotation-free, timestep-level preference optimization framework: it derives implicit win/loss policy supervision at each denoising step, enabling dense transition-level preference learning without dependence on final image samples or explicit reward modeling. The method combines a pretrained reference model, prompt contrast via semantic degradation, policy contrast in diffusion score space, and step-wise policy optimization. Experiments show that the approach matches or surpasses state-of-the-art preference optimization methods in both alignment fidelity and visual quality while requiring substantially less supervision.
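A natural question is what a "semantically degraded" prompt variant looks like. Neither the summary nor the abstract specifies the degradation scheme, so the Python sketch below shows one plausible instantiation (random word dropout); the function `degrade_prompt` and its parameters are hypothetical illustrations, not the paper's actual procedure.

```python
import random

def degrade_prompt(prompt: str, drop_prob: float = 0.5, seed=None) -> str:
    """Hypothetical semantic degradation via random word dropout.

    The degraded prompt would serve as the "losing" condition for the
    reference model, while the original prompt serves as the "winning" one.
    """
    rng = random.Random(seed)
    words = prompt.split()
    # Drop each word independently, but never return an empty prompt.
    kept = [w for w in words if rng.random() > drop_prob]
    return " ".join(kept) if kept else prompt

# Usage: the degraded variant loses content words, weakening text-image alignment.
print(degrade_prompt("a red fox jumping over a frozen lake", seed=0))
```

Other degradations (token shuffling, attribute removal, noun swaps) would fill the same contrastive role.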
📝 Abstract
Diffusion models have achieved impressive results in generative tasks such as text-to-image synthesis, yet they often struggle to fully align outputs with nuanced user intent and maintain consistent aesthetic quality. Existing preference-based training methods like Diffusion Direct Preference Optimization help address these issues but rely on costly and potentially noisy human-labeled datasets. In this work, we introduce Direct Diffusion Score Preference Optimization (DDSPO), which directly derives per-timestep supervision from winning and losing policies when such policies are available. Unlike prior methods that operate solely on final samples, DDSPO provides dense, transition-level signals across the denoising trajectory. In practice, we avoid reliance on labeled data by automatically generating preference signals using a pretrained reference model: we contrast its outputs when conditioned on original prompts versus semantically degraded variants. This practical strategy enables effective score-space preference supervision without explicit reward modeling or manual annotations. Empirical results demonstrate that DDSPO improves text-image alignment and visual quality, outperforming or matching existing preference-based methods while requiring significantly less supervision. Our implementation is available at: https://dohyun-as.github.io/DDSPO
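The abstract describes dense, per-timestep supervision in score space but does not state the exact objective. As a rough illustration only, here is a minimal PyTorch sketch of a DPO-style logistic loss over noise predictions, assuming a diffusers-style UNet whose forward call returns `.sample`; the names `reference_scores` and `ddspo_step_loss`, the `beta` temperature, and the squared-error distance are assumptions rather than the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def reference_scores(ref_unet, x_t, t, emb_orig, emb_degraded):
    """Winning/losing noise predictions from a frozen reference model.

    Conditioning on the original prompt plays the role of the winning
    policy; conditioning on the degraded prompt plays the losing one.
    """
    eps_win = ref_unet(x_t, t, encoder_hidden_states=emb_orig).sample
    eps_lose = ref_unet(x_t, t, encoder_hidden_states=emb_degraded).sample
    return eps_win, eps_lose

def ddspo_step_loss(eps_policy, eps_win, eps_lose, beta=0.1):
    """Hypothetical per-timestep preference loss in diffusion score space.

    Pulls the trainable policy's noise prediction toward the winning
    reference score and away from the losing one via a DPO-style logistic
    objective: no explicit reward model and no labeled preference pairs.
    """
    d_win = (eps_policy - eps_win).pow(2).flatten(1).mean(dim=1)
    d_lose = (eps_policy - eps_lose).pow(2).flatten(1).mean(dim=1)
    return -F.logsigmoid(beta * (d_lose - d_win)).mean()
```

Because such a loss is defined on individual denoising transitions, it can be accumulated at every timestep of the trajectory rather than only on the final decoded image, which is what would yield the dense, transition-level signal the abstract describes.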