🤖 AI Summary
Classifier-free guidance (CFG) improves text-to-image generation quality but incurs high inference overhead due to dual forward passes and complex sampling. This paper proposes TeEFusion, the first method to explicitly encode guidance scale into text embeddings via linear fusion of conditional and unconditional text representations, augmented by knowledge distillation. The student model thus replicates the teacher’s CFG behavior without additional parameters or architectural modifications. Crucially, TeEFusion eliminates CFG sampling entirely, enabling efficient single-pass generation. Evaluated on state-of-the-art models including SD3, it achieves up to 6× faster inference while matching CFG baseline image quality. The core contribution is a novel “guidance-scale embedding” paradigm—scalable, architecture-agnostic, and parameter-efficient—providing a practical lightweight alternative for diffusion model deployment.
📝 Abstract
Recent advances in text-to-image synthesis largely benefit from sophisticated sampling strategies and classifier-free guidance (CFG) to ensure high-quality generation. However, CFG's reliance on two forward passes, especially when combined with intricate sampling algorithms, results in prohibitively high inference costs. To address this, we introduce TeEFusion (**Te**xt **E**mbeddings **Fusion**), a novel and efficient distillation method that directly incorporates the guidance magnitude into the text embeddings and distills the teacher model's complex sampling strategy. By simply fusing conditional and unconditional text embeddings using linear operations, TeEFusion reconstructs the desired guidance without adding extra parameters, simultaneously enabling the student model to learn from the teacher's output produced via its sophisticated sampling approach. Extensive experiments on state-of-the-art models such as SD3 demonstrate that our method allows the student to closely mimic the teacher's performance with a far simpler and more efficient sampling strategy. Consequently, the student model achieves inference speeds up to 6× faster than the teacher model, while maintaining image quality at levels comparable to those obtained through the teacher's complex sampling approach. The code is publicly available at [github.com/AIDC-AI/TeEFusion](https://github.com/AIDC-AI/TeEFusion).
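The "linear fusion" of conditional and unconditional text embeddings can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `fuse_text_embeddings` is hypothetical, and the exact linear form is assumed here by analogy with the standard CFG update, `uncond + w * (cond - uncond)`.

```python
import numpy as np

def fuse_text_embeddings(e_cond: np.ndarray, e_uncond: np.ndarray, w: float) -> np.ndarray:
    """Linearly combine conditional and unconditional text embeddings.

    Hypothetical sketch of TeEFusion's idea: bake the guidance scale `w`
    into a single fused text embedding, mirroring the CFG direction
    e_uncond + w * (e_cond - e_uncond), so the diffusion model needs
    only ONE forward pass instead of two per denoising step.
    """
    return e_uncond + w * (e_cond - e_uncond)

# Toy embeddings of shape (sequence_length, embedding_dim)
e_cond = np.ones((4, 8))     # stands in for the prompt's text embedding
e_uncond = np.zeros((4, 8))  # stands in for the empty-prompt embedding

fused = fuse_text_embeddings(e_cond, e_uncond, w=3.0)
# fused has the same shape as the inputs, so it can be fed to the
# model's text-conditioning input without any architectural change.
```

Because the fusion is a fixed linear operation on existing embeddings, it adds no parameters; the distillation step then trains the student to match the teacher's CFG-guided outputs when conditioned on this fused embedding.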