🤖 AI Summary
Classifier-Free Guidance (CFG) enforces a fixed sum of guidance weights equal to one, inducing an expectation shift in the generated distribution that violates the theoretical foundations of diffusion models. This work provides the first theoretical analysis demonstrating this inconsistency and proposes Rectified CFG (ReCFG), which relaxes the unit-sum constraint so that the weighted score function strictly satisfies the expectation consistency condition required by the diffusion process. ReCFG admits a closed-form analytical solution, requires no additional training, and can be deployed as a plug-and-play module atop any pretrained diffusion model. Experiments on ImageNet (using EDM2) and CC12M (using SD3) show that ReCFG significantly improves image fidelity and text-image alignment without measurable degradation in sampling speed, achieving stable performance gains across diverse architectures and datasets with zero finetuning.
📝 Abstract
Classifier-Free Guidance (CFG), which combines the conditional and unconditional score functions with two coefficients summing to one, serves as a practical technique for diffusion model sampling. Theoretically, however, denoising with CFG cannot be expressed as a reciprocal diffusion process, which may introduce hidden risks in practice. In this work, we revisit the theory behind CFG and rigorously confirm that the improper configuration of the combination coefficients (i.e., the widely used summing-to-one version) induces an expectation shift in the generative distribution. To rectify this issue, we propose ReCFG, which relaxes the guidance coefficients such that denoising with ReCFG strictly aligns with diffusion theory. We further show that our approach enjoys a closed-form solution given the guidance strength. That way, the rectified coefficients can be readily pre-computed by traversing the observed data, leaving the sampling speed barely affected. Empirical evidence on real-world data demonstrates the compatibility of our post-hoc design with existing state-of-the-art diffusion models, including both class-conditioned ones (e.g., EDM2 on ImageNet) and text-conditioned ones (e.g., SD3 on CC12M), without any retraining. Code is available at https://github.com/thuxmf/recfg.
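To make the coefficient constraint concrete, here is a minimal sketch contrasting the standard summing-to-one CFG combination with a relaxed, ReCFG-style one. The function names and the scalar setting are illustrative assumptions; the actual rectified coefficients in the paper come from its closed-form solution, which is not reproduced here.

```python
def cfg_score(s_cond: float, s_uncond: float, w: float) -> float:
    """Classic CFG: the coefficients (1 + w) and (-w) sum to one."""
    return (1.0 + w) * s_cond - w * s_uncond


def relaxed_score(s_cond: float, s_uncond: float,
                  gamma_cond: float, gamma_uncond: float) -> float:
    """ReCFG-style relaxation (illustrative): the two coefficients are
    independent, so their sum need not equal one -- the unit-sum
    constraint this work argues should be dropped."""
    return gamma_cond * s_cond + gamma_uncond * s_uncond
```

With `gamma_cond = 1 + w` and `gamma_uncond = -w`, the relaxed form recovers classic CFG, so the relaxation strictly generalizes it.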