🤖 AI Summary
Diffusion models for text-to-image generation often fall short on fine-grained prompt fidelity and compositional correctness, while existing reinforcement-learning fine-tuning approaches suffer from high variance, distribution drift, and reward hacking. This work proposes the Centered Reward Distillation (CRD) framework, which introduces within-prompt centering into a forward-process fine-tuning objective so that the intractable normalizing constant cancels, yielding a stable and tractable reward-matching objective derived from KL-regularized reward maximization. By decoupling the sampler from the reference distribution, anchoring the KL divergence to a classifier-free-guidance (CFG) guided pretrained model, and employing reward-adaptive KL weighting, CRD mitigates reward hacking and improves training stability. Experiments demonstrate that CRD converges rapidly under both GenEval and OCR rewards, attaining state-of-the-art performance while substantially reducing reward hacking on unseen preference metrics.
📝 Abstract
Diffusion and flow models achieve state-of-the-art (SOTA) generative performance, yet many practically important behaviors, such as fine-grained prompt fidelity, compositional correctness, and text rendering, are only weakly specified by score- or flow-matching pretraining objectives. Reinforcement learning (RL) fine-tuning with external, black-box rewards is a natural remedy, but diffusion RL is often brittle: trajectory-based methods incur high memory cost and high-variance gradient estimates, while forward-process approaches converge faster but can suffer from distribution drift, and hence reward hacking. In this work, we present **Centered Reward Distillation (CRD)**, a diffusion RL framework derived from KL-regularized reward maximization and built on forward-process fine-tuning. The key insight is that the intractable normalizing constant cancels under *within-prompt centering*, yielding a well-posed reward-matching objective. To enable reliable text-to-image fine-tuning, we introduce techniques that explicitly control distribution drift: (*i*) decoupling the sampler from the moving reference to prevent ratio-signal collapse, (*ii*) anchoring the KL term to a CFG-guided pretrained model to control long-run drift and match the pretrained model's inference-time semantics, and (*iii*) reward-adaptive KL strength, which accelerates early learning under strong KL regularization while reducing late-stage exploitation of reward-model loopholes. Experiments on text-to-image post-training with `GenEval` and `OCR` rewards show that CRD achieves reward-optimization results competitive with the state of the art, with fast convergence and reduced reward hacking, as validated on unseen preference metrics.
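The centering trick can be made concrete. Under KL-regularized reward maximization, the optimal fine-tuned distribution has the standard closed form $p^*(x \mid c) \propto p_{\text{ref}}(x \mid c)\, \exp\!\big(r(x, c)/\beta\big)$, so the optimal log-density ratio is $r(x, c)/\beta - \log Z(c)$, where $Z(c)$ is the intractable prompt-level normalizer. Because $\log Z(c)$ is constant across samples drawn for the same prompt, subtracting the within-prompt mean removes it exactly. A minimal numerical sketch (the helper name, reward values, and $\beta$ are illustrative, not from the paper):

```python
import math

def within_prompt_center(values):
    """Subtract the mean over samples drawn for the same prompt."""
    mean = sum(values) / len(values)
    return [v - mean for v in values]

# Rewards for a batch of samples under one prompt; beta is the KL strength;
# log_Z stands in for the intractable prompt-level normalizer log Z(c).
rewards = [2.0, 0.5, 1.5, 3.0]
beta = 0.1
log_Z = 7.3  # arbitrary: any value cancels after centering

# Optimal log-ratio targets r/beta - log Z(c): the same log Z(c) is shared
# by every sample under the prompt.
targets = [r / beta - log_Z for r in rewards]

# After within-prompt centering, the log Z(c) term vanishes, leaving a
# target that depends only on the (centered) rewards.
assert all(
    math.isclose(a, b)
    for a, b in zip(
        within_prompt_center(targets),
        within_prompt_center([r / beta for r in rewards]),
    )
)
```

The same cancellation holds for any choice of `log_Z`, which is why the centered objective never needs to evaluate the normalizing constant.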