Reinforcing Diffusion Models by Direct Group Preference Optimization

📅 2025-10-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing reinforcement learning (RL) optimization methods for diffusion models, such as Group Relative Policy Optimization (GRPO), rely on stochastic policies, rendering them incompatible with efficient deterministic ODE samplers and thereby impairing training speed and stability. This work proposes Direct Group Preference Optimization (DGPO), the first online RL framework for diffusion models that eliminates the stochastic-policy requirement and operates without policy gradients. DGPO performs end-to-end optimization using relative preference signals among samples within a group and explicitly supports deterministic ODE sampling for rapid, stable training. Its core innovation is to learn directly from group-level preferences rather than through a policy-gradient objective, removing the sampling bottleneck that forces prior methods onto inefficient SDE samplers. Experiments demonstrate that DGPO trains approximately 20× faster than state-of-the-art methods while improving both in-domain and out-of-domain reward metrics.
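The group-relative preference signal described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual objective: the function names and the choice of mean/std normalization are assumptions, loosely following how group-relative advantages are computed in GRPO-style methods.

```python
import numpy as np

def group_relative_weights(rewards, eps=1e-8):
    """Map per-sample rewards within a group to zero-mean relative weights.

    Samples scoring above the group mean receive positive weight (reinforced);
    samples below the mean receive negative weight (suppressed).
    """
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

def weighted_group_loss(per_sample_losses, rewards):
    """Weight each sample's loss by its group-relative preference weight.

    A stand-in for a direct (non-policy-gradient) training signal.
    """
    w = group_relative_weights(rewards)
    losses = np.asarray(per_sample_losses, dtype=np.float64)
    return float((w * losses).mean())
```

Because the weights sum to zero within each group, only the *relative* ranking of samples matters, which is the sense in which the method "utilizes relative information of samples within groups."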

📝 Abstract
While reinforcement learning methods such as Group Relative Preference Optimization (GRPO) have significantly enhanced Large Language Models, adapting them to diffusion models remains challenging. In particular, GRPO demands a stochastic policy, yet the most cost-effective diffusion samplers are based on deterministic ODEs. Recent work addresses this issue by using inefficient SDE-based samplers to induce stochasticity, but this reliance on model-agnostic Gaussian noise leads to slow convergence. To resolve this conflict, we propose Direct Group Preference Optimization (DGPO), a new online RL algorithm that dispenses with the policy-gradient framework entirely. DGPO learns directly from group-level preferences, which utilize relative information of samples within groups. This design eliminates the need for inefficient stochastic policies, unlocking the use of efficient deterministic ODE samplers and faster training. Extensive results show that DGPO trains around 20 times faster than existing state-of-the-art methods and achieves superior performance on both in-domain and out-of-domain reward metrics. Code is available at https://github.com/Luo-Yihong/DGPO.
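The abstract's contrast between deterministic ODE samplers and stochasticity-inducing SDE samplers can be illustrated with toy Euler steps. This is a minimal sketch, not the paper's sampler: the drift and diffusion terms are arbitrary stand-ins, and real diffusion samplers integrate learned score functions over many steps.

```python
import numpy as np

def ode_step(x, drift, dt):
    # Deterministic Euler step of a probability-flow ODE:
    # the same input always produces the same output.
    return x + drift(x) * dt

def sde_step(x, drift, diffusion, dt, rng):
    # Euler-Maruyama step: injects model-agnostic Gaussian noise,
    # so repeated runs from the same state differ.
    noise = rng.standard_normal()
    return x + drift(x) * dt + diffusion * np.sqrt(dt) * noise
```

The determinism of the ODE step is what makes it cheap and reproducible, and it is exactly this determinism that breaks the stochastic-policy assumption of GRPO-style methods.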
Problem

Research questions and friction points this paper is trying to address.

Optimizing diffusion models using efficient deterministic samplers
Eliminating reliance on slow stochastic policies for preference learning
Accelerating training while maintaining superior reward performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

DGPO uses group-level preferences for direct optimization
Eliminates stochastic policies to enable deterministic samplers
Trains 20x faster while maintaining superior performance
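The pieces above can be combined into a toy online loop. Everything here is an invented stand-in: the scalar "model" (a shift parameter theta), the reward (prefer samples near zero), and the update rule are illustrative only and are not the paper's architecture or objective; they merely show the shape of an online method that samples a group deterministically from noise, scores it, and applies a direct group-weighted update.

```python
import numpy as np

def toy_group_preference_step(theta, rng, group_size=8, lr=0.5):
    # "Sampler": a deterministic map from noise z to a sample x = z + theta
    z = rng.standard_normal(group_size)
    x = z + theta
    # Hypothetical reward model: prefer samples near 0
    r = -x ** 2
    # Zero-mean group-relative weights
    w = (r - r.mean()) / (r.std() + 1e-8)
    # Direct update: pull theta toward preferred samples, away from the rest
    return theta + lr * np.mean(w * (x - theta))

rng = np.random.default_rng(0)
theta = 5.0
for _ in range(50):
    theta = toy_group_preference_step(theta, rng)
# theta drifts toward the high-reward region near 0
```

Note there is no log-probability or policy-gradient term anywhere in the update, which is the structural point of dispensing with stochastic policies.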