🤖 AI Summary
To address the distribution shift and degraded semantic alignment caused by large, static guidance weights in classifier-free guidance (CFG), this paper proposes dynamic CFG: a continuously differentiable guidance-weight function learned end-to-end, conditioned on the conditioning input and on the start and target denoising timesteps. The weight function is optimized by minimizing the discrepancy between the guided sampling distribution and the true conditional data distribution, and it naturally extends to reward-based guidance (e.g., CLIP score) for steering the generation distribution toward downstream objectives. Integrated into standard diffusion frameworks, dynamic CFG requires only joint training of a lightweight weight network, leaving the backbone model unchanged. Experiments on image generation and text-to-image synthesis demonstrate significant FID reductions and improved prompt–image alignment, validating its dual advantages in perceptual quality and distribution fidelity.
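The core mechanics the summary describes can be illustrated with a minimal NumPy sketch. `cfg_score` is the standard CFG linear combination of conditional and unconditional score estimates; `DynamicWeight` is a hypothetical stand-in (name, parameters, and functional form are illustrative assumptions, not the paper's architecture) for the lightweight network producing a weight $ω_{c,(s,t)}$ from the conditioning embedding and the timestep pair $(s,t)$:

```python
import numpy as np

def cfg_score(score_cond, score_uncond, omega):
    """Classifier-free guidance: linearly combine conditional and
    unconditional score estimates with guidance weight omega.
    omega = 0 recovers the unconditional score, omega = 1 the conditional."""
    return score_uncond + omega * (score_cond - score_uncond)

class DynamicWeight:
    """Hypothetical lightweight weight function omega_{c,(s,t)}.
    A real implementation would be a small trained network; here a
    smooth parametric function of (s, t) and a conditioning embedding
    stands in purely for illustration."""
    def __init__(self, a=1.0, b=0.5):
        # a, b would be learnable parameters in an actual setup
        self.a, self.b = a, b

    def __call__(self, c_embed, s, t):
        # Positive, continuously differentiable in (s, t) and c_embed;
        # the specific shape is an assumption for this sketch.
        return 1.0 + self.a * np.tanh(self.b * (t - s) + c_embed.mean())
```

At sampling time the static $ω$ in `cfg_score` would simply be replaced by `DynamicWeight()(c_embed, s, t)` at each denoising step.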
📝 Abstract
Classifier-free guidance (CFG) is a widely used technique for improving the perceptual quality of samples from conditional diffusion models. It operates by linearly combining conditional and unconditional score estimates using a guidance weight $ω$. While a large, static weight can markedly improve visual results, this often comes at the cost of poorer distributional alignment. To better approximate the target conditional distribution, we instead learn guidance weights $ω_{c,(s,t)}$, which are continuous functions of the conditioning $c$, the time $t$ from which we denoise, and the time $s$ towards which we denoise. We achieve this by minimizing the distributional mismatch between noised samples from the true conditional distribution and samples from the guided diffusion process. We extend our framework to reward-guided sampling, enabling the model to target distributions tilted by a reward function $R(x_0,c)$, defined on clean data $x_0$ and conditioning $c$. We demonstrate the effectiveness of our methodology on low-dimensional toy examples and high-dimensional image settings, where we observe improvements in Fréchet inception distance (FID) for image generation. In text-to-image applications, we observe that employing a reward function given by the CLIP score leads to guidance weights that improve image-prompt alignment.
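The reward-tilted target mentioned above, proportional to $p(x_0 \mid c)\,e^{\beta R(x_0,c)}$, can be sketched via self-normalized importance weights over a batch of samples. This is a generic illustration of reward tilting under an assumed inverse temperature `beta`, not the paper's training procedure:

```python
import numpy as np

def reward_tilted_weights(rewards, beta=1.0):
    """Self-normalized weights proportional to exp(beta * R(x0, c)),
    approximating expectations under the reward-tilted target
    p(x0 | c) * exp(beta * R(x0, c)) / Z given samples from p(x0 | c).
    `rewards` holds R(x0, c) for each sample; `beta` is an assumed
    tilting strength."""
    logits = beta * np.asarray(rewards, dtype=float)
    logits -= logits.max()          # subtract max for numerical stability
    w = np.exp(logits)
    return w / w.sum()              # weights sum to 1
```

Samples with higher reward (e.g., higher CLIP score between image and prompt) receive proportionally more mass, which is the effect the learned guidance weights are trained to reproduce inside the sampler itself.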