AI Summary
Diffusion models pose a challenge for the direct application of policy gradient-based reinforcement learning methods because their likelihood is intractable, and existing research lacks a systematic analysis of how likelihood estimation affects optimization. This work presents the first disentanglement of three key components of reinforcement learning for diffusion models: the policy gradient objective, the likelihood estimator, and the sampling strategy. The study reveals that a final-sample likelihood estimate based on the evidence lower bound (ELBO) is the dominant factor governing optimization efficacy, underscoring the centrality of likelihood estimation over loss-function design. Experiments on SD 3.5 Medium demonstrate that the proposed approach improves the GenEval score from 0.24 to 0.95, achieves 4.6× higher training efficiency than FlowGRPO and 2× that of the state-of-the-art DiffusionNFT, and exhibits no reward hacking behavior.
Abstract
Reinforcement learning has been widely applied to diffusion and flow models for visual tasks such as text-to-image generation. However, these tasks remain challenging because diffusion models have intractable likelihoods, which creates a barrier to directly applying popular policy-gradient methods. Existing approaches primarily focus on crafting new objectives built on already heavily engineered LLM objectives, using ad hoc likelihood estimators, without a thorough investigation of how this estimation affects overall algorithmic performance. In this work, we provide a systematic analysis of the RL design space by disentangling three factors: i) policy-gradient objectives, ii) likelihood estimators, and iii) rollout sampling schemes. We show that adopting an evidence lower bound (ELBO) based model likelihood estimator, computed only from the final generated sample, is the dominant factor enabling effective, efficient, and stable RL optimization, outweighing the impact of the specific policy-gradient loss functional. We validate our findings across multiple reward benchmarks using SD 3.5 Medium and observe consistent trends across all tasks. Our method improves the GenEval score from 0.24 to 0.95 in 90 GPU hours, which is $4.6\times$ more efficient than FlowGRPO and $2\times$ more efficient than the SOTA method DiffusionNFT, without reward hacking.
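To make the central idea concrete, the sketch below illustrates (under stated assumptions, not the paper's exact formulation) how a Monte Carlo ELBO evaluated only at the final generated sample can stand in for the intractable log-likelihood inside a REINFORCE-style policy-gradient objective. The function names (`elbo_log_likelihood`, `pg_surrogate_loss`), the toy noising schedule, and the denoiser interface `eps_pred_fn(x_t, t)` are all hypothetical placeholders for illustration.

```python
import numpy as np

def elbo_log_likelihood(x0, eps_pred_fn, num_draws=16, rng=None):
    """Toy Monte Carlo ELBO surrogate for log p(x0): the average negative
    denoising error over randomly drawn diffusion times (hypothetical,
    simplified schedule; real ELBOs carry timestep-dependent weights)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    total = 0.0
    for _ in range(num_draws):
        t = rng.uniform(0.05, 0.95)               # random diffusion time
        eps = rng.standard_normal(x0.shape)       # Gaussian noise sample
        x_t = np.sqrt(1.0 - t) * x0 + np.sqrt(t) * eps  # toy noising
        total += -np.sum((eps_pred_fn(x_t, t) - eps) ** 2)
    return total / num_draws

def pg_surrogate_loss(samples, rewards, eps_pred_fn):
    """REINFORCE-style surrogate: -E[(r - baseline) * log p_ELBO(x0)],
    where each log-likelihood term uses only the final sample x0."""
    rewards = np.asarray(rewards, dtype=float)
    baseline = rewards.mean()                     # variance-reduction baseline
    logps = np.array([elbo_log_likelihood(x, eps_pred_fn) for x in samples])
    return -np.mean((rewards - baseline) * logps)
```

In an actual training loop the ELBO term would be differentiated through the denoiser's parameters with an autodiff framework; the point of the sketch is only that the likelihood estimator touches the final sample alone, rather than every intermediate state of the sampling trajectory.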