AI Summary
This work addresses the inefficiency of iterative sampling in diffusion models and the performance ceiling imposed by conventional distillation objectives. To overcome these challenges, the authors propose a novel paradigm that reformulates distribution matching as a reward signal $R_{dm}$ within a unified framework integrating diffusion distillation and reinforcement learning. The method introduces Group Normalized Distribution Matching (GNDM) to enhance optimization stability, and its reward-centric formulation supports adaptive multi-reward fusion and importance sampling. Experimental results demonstrate that GNDM reduces the Fréchet Inception Distance (FID) by 1.87 compared to baseline methods. Furthermore, its multi-reward variant, GNDMR, achieves a Human Preference Score (HPS) of 30.37 while lowering FID-SD to 12.21, significantly improving aesthetic quality without compromising fidelity.
Abstract
Diffusion models achieve state-of-the-art generative performance but are fundamentally bottlenecked by their slow iterative sampling process. While diffusion distillation techniques enable high-fidelity few-step generation, traditional objectives often restrict the student's performance by anchoring it solely to the teacher. Recent approaches have attempted to break this ceiling by integrating Reinforcement Learning (RL), typically through a simple summation of distillation and RL objectives. In this work, we propose a novel paradigm that reconceptualizes distribution matching as a reward, denoted $R_{dm}$. This unified perspective bridges the algorithmic gap between Distribution Matching Distillation (DMD) and RL, providing several key benefits. (1) Enhanced optimization stability: we introduce Group Normalized Distribution Matching (GNDM), which adapts standard RL group normalization to stabilize $R_{dm}$ estimation. By leveraging group-mean statistics, GNDM establishes a more robust and effective optimization direction. (2) Seamless reward integration: our reward-centric formulation inherently supports adaptive weighting mechanisms, allowing flexible combination of DMD with external reward models. (3) Improved sampling efficiency: by aligning with RL principles, the framework readily incorporates importance sampling (IS), leading to a significant boost in sampling efficiency. Extensive experiments demonstrate that GNDM outperforms vanilla DMD, reducing FID by 1.87. Furthermore, our multi-reward variant, GNDMR, surpasses existing baselines by achieving a strong balance between aesthetic quality and fidelity, reaching a peak HPS of 30.37 and a low FID-SD of 12.21. Overall, $R_{dm}$ provides a flexible, stable, and efficient framework for real-time high-fidelity synthesis. Code will be released upon publication.
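The abstract does not spell out the normalization or fusion formulas. As a rough, hypothetical illustration of the two reward-centric ideas it names, the sketch below applies standard RL-style group normalization (subtract the group mean, divide by the group standard deviation) to a batch of per-sample distribution-matching rewards, and then combines the normalized $R_{dm}$ with a normalized external reward via a weighting coefficient. The function names, the use of the group standard deviation, and the fixed fusion weight `w_ext` are assumptions for illustration only, not the paper's actual GNDM or adaptive-weighting procedure.

```python
import numpy as np

def group_normalize(rewards, eps=1e-8):
    """Normalize a group of per-sample scalar rewards by group statistics,
    in the style of standard RL group normalization (illustrative sketch)."""
    r = np.asarray(rewards, dtype=np.float64)
    # Subtract the group mean and divide by the group std (eps avoids /0).
    return (r - r.mean()) / (r.std() + eps)

def fuse_rewards(r_dm, r_ext, w_ext=0.5):
    """Hypothetical fixed-weight fusion of a distribution-matching reward
    r_dm with an external reward r_ext, each normalized within its group.
    The paper describes *adaptive* weighting; a constant w_ext stands in here."""
    return (1.0 - w_ext) * group_normalize(r_dm) + w_ext * group_normalize(r_ext)
```

Normalizing each reward stream within its group before fusing puts both on a comparable scale, which is one plausible reason a reward-centric view makes combining DMD with external reward models straightforward.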