🤖 AI Summary
To address three key challenges in reward-guided finetuning of diffusion models (reduced sample diversity, erosion of the pretrained prior, and slow convergence), this paper proposes Nabla-GFlowNet, the first method to incorporate reward-gradient signals into generative flow networks (GFlowNets). It introduces a gradient-aware balance objective, ∇-DB, and a residual variant, residual ∇-DB, that jointly target diversity preservation and prior retention, combining reward-gradient guidance with text-conditioned diffusion sampling and flow-based modeling. Experiments across diverse realistic reward functions show that Nabla-GFlowNet converges quickly while maintaining sample diversity and largely preserving the pretrained prior of models such as Stable Diffusion.
📝 Abstract
While large diffusion models are commonly trained on datasets collected for target downstream tasks, it is often desirable to align and finetune pretrained diffusion models with reward functions that are either designed by experts or learned from small-scale datasets. Existing post-training methods for reward finetuning of diffusion models typically suffer from a lack of diversity in generated samples, a lack of prior preservation, and/or slow convergence. Inspired by recent successes in generative flow networks (GFlowNets), a class of probabilistic models that sample with probability proportional to the unnormalized density given by a reward function, we propose a novel GFlowNet method dubbed Nabla-GFlowNet (abbreviated as ∇-GFlowNet), the first GFlowNet method that leverages the rich signal in reward gradients, together with an objective called ∇-DB, plus its variant residual ∇-DB, designed for prior-preserving diffusion finetuning. We show that our proposed method achieves fast yet diversity- and prior-preserving finetuning of Stable Diffusion, a large-scale text-conditioned image diffusion model, on different realistic reward functions.
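The GFlowNet premise in the abstract (sampling in proportion to an unnormalized reward) is typically enforced through a balance condition. As a hedged sketch, the standard detailed-balance (DB) condition and its log-space training loss are shown below; the abstract does not state the form of ∇-DB, which presumably augments an objective of this kind with terms involving the reward gradient:

```latex
% Standard GFlowNet detailed balance: for every transition s -> s',
% the forward flow must equal the backward flow, with the terminal
% constraint F(x) = R(x) so that samples are drawn in proportion to R.
\begin{align}
  F(s)\,P_F(s' \mid s) &= F(s')\,P_B(s \mid s'), \\
  F(x) &= R(x) \quad \text{for terminal states } x.
\end{align}
% The usual DB training loss is the squared log-residual of this balance:
\begin{equation}
  \mathcal{L}_{\mathrm{DB}}(s, s')
  = \Bigl( \log F(s) + \log P_F(s' \mid s)
         - \log F(s') - \log P_B(s \mid s') \Bigr)^2 .
\end{equation}
```

Per the abstract, ∇-DB is a gradient-informed counterpart of such a balance objective; its exact form is given in the paper, not reproduced here.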