Unconditional Priors Matter! Improving Conditional Generation of Fine-Tuned Diffusion Models

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In classifier-free guidance (CFG) training, jointly optimizing conditional and unconditional noise prediction in a single network degrades the quality of the unconditional prior, which in turn compromises conditional generation fidelity.

Method: This work identifies the critical role of unconditional prior quality in CFG performance and proposes a cross-model unconditional noise replacement mechanism: during inference, high-quality unconditional noise predictions from a pretrained base model replace the low-quality ones from the fine-tuned model, decoupling conditional and unconditional modeling. The approach requires no modification to training objectives or network architecture; it is a plug-and-play prior transfer.

Results: Evaluated on state-of-the-art image and video generation models, including Zero-1-to-3, Versatile Diffusion, DiT, DynamiCrafter, and InstructPix2Pix, the method consistently improves generation quality and alignment with text instructions, demonstrating both the efficacy and the broad generalizability of unconditional prior optimization.
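The replacement mechanism described above amounts to a one-line change in the standard CFG update: the unconditional branch is queried on the pretrained base model instead of the fine-tuned one. A minimal sketch of that inference step (function names, signatures, and the null-condition convention are illustrative assumptions, not the paper's actual code):

```python
def cfg_with_base_prior(finetuned, base, x_t, t, cond, guidance_scale, null_cond=None):
    """One CFG noise estimate where the unconditional prediction is taken
    from the pretrained base model rather than the fine-tuned model.

    Standard CFG:  eps = eps_uncond + w * (eps_cond - eps_uncond),
    with both terms from the fine-tuned network. Here only the
    unconditional term is swapped for the base model's prediction.
    """
    eps_cond = finetuned(x_t, t, cond)    # conditional branch: fine-tuned model
    eps_uncond = base(x_t, t, null_cond)  # unconditional branch: base model's prior
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

Because the change is confined to inference, it drops into any sampler that already computes the two CFG branches, with no retraining required.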

📝 Abstract
Classifier-Free Guidance (CFG) is a fundamental technique in training conditional diffusion models. The common practice in CFG-based training is to use a single network to learn both conditional and unconditional noise prediction, with a small dropout rate for the conditioning. However, we observe that jointly learning unconditional noise with limited bandwidth during training results in poor priors for the unconditional case. More importantly, these poor unconditional noise predictions seriously degrade the quality of conditional generation. Inspired by the fact that most CFG-based conditional models are trained by fine-tuning a base model with better unconditional generation, we first show that simply replacing the unconditional noise in CFG with that predicted by the base model can significantly improve conditional generation. Furthermore, we show that a diffusion model other than the one the fine-tuned model was trained from can be used for unconditional noise replacement. We experimentally verify our claims on a range of CFG-based conditional models for both image and video generation, including Zero-1-to-3, Versatile Diffusion, DiT, DynamiCrafter, and InstructPix2Pix.
Problem

Research questions and friction points this paper is trying to address.

Improving conditional generation in fine-tuned diffusion models
Addressing poor unconditional noise prediction in CFG training
Enhancing quality by replacing unconditional noise with base model predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replacing unconditional noise with base model predictions
Using external diffusion models for noise replacement
Improving conditional generation via better priors