AI Summary
Existing reinforcement learning (RL)-based fine-tuning of text-to-image diffusion models suffers from sparse reward signals: each generation yields only a single delayed reward, hindering step-level action attribution and resulting in inefficient training. To address this, we propose a model-free, architecture-free dynamic dense reward allocation mechanism. Our approach introduces a novel step-wise credit assignment framework grounded in the change in cosine similarity between intermediate and final denoised images, augmented by reward shaping to emphasize critical denoising steps. Without degrading the original policy's performance, our method improves sample efficiency by 1.25 to 2 times and demonstrates superior generalization across four human preference-based reward functions. It effectively mitigates both the inaccurate step-level attribution and the training inefficiency inherent in sparse-reward RL fine-tuning of diffusion models.
Abstract
Recent advances in text-to-image (T2I) diffusion model fine-tuning leverage reinforcement learning (RL) to align generated images with learnable reward functions. Existing approaches reformulate denoising as a Markov decision process for RL-driven optimization. However, they suffer from reward sparsity, receiving only a single delayed reward per generated trajectory. This flaw hinders precise step-level attribution of denoising actions and undermines training efficiency. To address this, we propose a simple yet effective credit assignment framework that dynamically distributes dense rewards across denoising steps. Specifically, we track changes in cosine similarity between intermediate and final images to quantify each step's contribution to progressively reducing the distance to the final image. Our approach avoids additional auxiliary neural networks for step-level preference modeling and instead uses reward shaping to highlight denoising phases that have a greater impact on image quality. Our method achieves 1.25 to 2 times higher sample efficiency and better generalization across four human preference reward functions, without compromising the original optimal policy.
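The core mechanism described above, tracking cosine-similarity changes to redistribute a single trajectory-level reward over denoising steps, can be illustrated with a minimal sketch. The function name, the non-negative clipping of per-step contributions, and the normalization so that the dense rewards sum to the sparse reward are illustrative assumptions, not the paper's exact shaping scheme:

```python
import numpy as np

def dense_step_rewards(intermediates, final_image, final_reward):
    """Hypothetical sketch: redistribute a sparse trajectory reward across
    denoising steps, weighted by each step's gain in cosine similarity
    to the final denoised image."""
    f = final_image.ravel()
    f = f / (np.linalg.norm(f) + 1e-8)
    # Cosine similarity of each intermediate denoised image to the final one.
    sims = [float(x.ravel() @ f / (np.linalg.norm(x.ravel()) + 1e-8))
            for x in intermediates]
    # A step's contribution is the increase in similarity it produced.
    deltas = np.diff(np.array(sims))
    # Assumed shaping: keep only positive contributions, then normalize so
    # the dense rewards sum to the original sparse reward.
    weights = np.maximum(deltas, 0.0)
    if weights.sum() == 0.0:
        weights = np.ones_like(deltas)  # fall back to uniform credit
    weights = weights / weights.sum()
    return final_reward * weights
```

Steps that move the trajectory faster toward the final image receive proportionally more credit, while the total reward delivered to the policy is unchanged, which is consistent with the claim of not compromising the original optimal policy.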