Rethinking Direct Preference Optimization in Diffusion Models

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient policy exploration and training instability in aligning text-to-image (T2I) diffusion models with human preferences, this paper proposes a novel direct preference optimization paradigm. Methodologically, it introduces two key innovations: (1) a relaxed reference model update strategy that abandons the conventional frozen-reference constraint, thereby enhancing policy exploration; and (2) a timestep-aware training mechanism integrating dynamic reward weighting and reference-model regularization to mitigate inter-timestep reward scale imbalance. The approach constitutes a plug-and-play framework compatible with mainstream preference optimization algorithms. Empirically, it achieves significant improvements over state-of-the-art methods across multiple human preference evaluation benchmarks—demonstrating superior alignment performance while simultaneously enhancing both exploratory capability and training stability.
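The relaxed reference model update described above could be realized as an exponential-moving-average (EMA) of the policy weights. The paper's summary does not give the exact update rule, so the EMA form, the `tau` coefficient, and the flat parameter lists below are illustrative assumptions, not the authors' implementation:

```python
def ema_reference_update(ref_params, policy_params, tau=0.99):
    """Soft reference update: ref <- tau * ref + (1 - tau) * policy.

    With tau = 1.0 this reduces to the conventional frozen reference;
    tau < 1.0 lets the anchor slowly trail the policy, permitting more
    exploration while still providing a stable regularization target.
    (Illustrative rule; the paper's exact update may differ.)
    """
    return [tau * r + (1.0 - tau) * p
            for r, p in zip(ref_params, policy_params)]
```

For example, `ema_reference_update([1.0, 2.0], [0.0, 0.0], tau=0.9)` moves each reference weight 10% of the way toward the policy, giving `[0.9, 1.8]`, while `tau=1.0` leaves the reference frozen.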

📝 Abstract
Aligning text-to-image (T2I) diffusion models with human preferences has emerged as a critical research challenge. While recent advances in this area have extended preference optimization techniques from large language models (LLMs) to the diffusion setting, they often struggle with limited exploration. In this work, we propose a novel and orthogonal approach to enhancing diffusion-based preference optimization. First, we introduce a stable reference model update strategy that relaxes the frozen reference model, encouraging exploration while maintaining a stable optimization anchor through reference model regularization. Second, we present a timestep-aware training strategy that mitigates the reward scale imbalance problem across timesteps. Our method can be integrated into various preference optimization algorithms. Experimental results show that our approach improves the performance of state-of-the-art methods on human preference evaluation benchmarks.
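The timestep-aware strategy from the abstract could be sketched as a per-timestep weight applied to a DPO-style pairwise objective. The sigmoid form below is standard DPO; the weight schedule, its direction, and `beta` are placeholders for illustration and are not the paper's formula:

```python
import math

def timestep_weight(t, T):
    """Hypothetical schedule that grows with t, down-weighting timesteps
    whose per-step rewards have larger scale (illustrative only; the
    paper's schedule may go the other way or take another form)."""
    return (t + 1) / T

def timestep_aware_dpo_loss(delta_win, delta_lose, t, T, beta=0.1):
    """DPO-style pairwise loss on per-timestep log-ratios (policy minus
    reference) for the preferred and dispreferred samples, with the
    preference margin rescaled by the timestep weight."""
    margin = beta * timestep_weight(t, T) * (delta_win - delta_lose)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

Rescaling the margin rather than the raw rewards keeps all timesteps on a comparable loss scale, which is one plausible way to address the inter-timestep reward imbalance the paper targets.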
Problem

Research questions and friction points this paper is trying to address.

Aligning text-to-image diffusion models with human preferences
Overcoming limited exploration in preference optimization techniques
Addressing reward scale imbalance across timesteps in diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stable reference model update strategy
Timestep-aware training strategy
Plug-and-play integration with mainstream preference optimization algorithms