TeEFusion: Blending Text Embeddings to Distill Classifier-Free Guidance

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Classifier-free guidance (CFG) improves text-to-image generation quality but incurs high inference overhead due to dual forward passes and complex sampling. This paper proposes TeEFusion, the first method to explicitly encode guidance scale into text embeddings via linear fusion of conditional and unconditional text representations, augmented by knowledge distillation. The student model thus replicates the teacher’s CFG behavior without additional parameters or architectural modifications. Crucially, TeEFusion eliminates CFG sampling entirely, enabling efficient single-pass generation. Evaluated on state-of-the-art models including SD3, it achieves up to 6× faster inference while matching CFG baseline image quality. The core contribution is a novel “guidance-scale embedding” paradigm—scalable, architecture-agnostic, and parameter-efficient—providing a practical lightweight alternative for diffusion model deployment.

📝 Abstract
Recent advances in text-to-image synthesis largely benefit from sophisticated sampling strategies and classifier-free guidance (CFG) to ensure high-quality generation. However, CFG's reliance on two forward passes, especially when combined with intricate sampling algorithms, results in prohibitively high inference costs. To address this, we introduce TeEFusion (Text Embeddings Fusion), a novel and efficient distillation method that directly incorporates the guidance magnitude into the text embeddings and distills the teacher model's complex sampling strategy. By simply fusing conditional and unconditional text embeddings using linear operations, TeEFusion reconstructs the desired guidance without adding extra parameters, simultaneously enabling the student model to learn from the teacher's output produced via its sophisticated sampling approach. Extensive experiments on state-of-the-art models such as SD3 demonstrate that our method allows the student to closely mimic the teacher's performance with a far simpler and more efficient sampling strategy. Consequently, the student model achieves inference speeds up to 6× faster than the teacher model, while maintaining image quality at levels comparable to those obtained through the teacher's complex sampling approach. The code is publicly available at github.com/AIDC-AI/TeEFusion.
Problem

Research questions and friction points this paper is trying to address.

Reducing high inference costs in text-to-image synthesis
Distilling complex sampling strategies into simpler models
Maintaining image quality while speeding up generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses conditional and unconditional text embeddings
Distills teacher model's complex sampling strategy
Achieves up to 6× faster inference with image quality comparable to the CFG baseline
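The contrast between standard CFG and TeEFusion-style inference can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fusion form `e_uncond + w * (e_cond - e_uncond)` is assumed here by analogy with the CFG update, and `model`, `fuse_embeddings`, and `single_pass_denoise` are hypothetical names.

```python
import numpy as np

def cfg_denoise(model, x, e_cond, e_uncond, w):
    """Standard classifier-free guidance: two forward passes per step."""
    eps_cond = model(x, e_cond)      # conditional prediction
    eps_uncond = model(x, e_uncond)  # unconditional prediction
    return eps_uncond + w * (eps_cond - eps_uncond)

def fuse_embeddings(e_cond, e_uncond, w):
    """Linear fusion that encodes the guidance scale w into one embedding.
    (Illustrative form; the paper's exact fusion may differ.)"""
    return e_uncond + w * (e_cond - e_uncond)

def single_pass_denoise(model, x, e_cond, e_uncond, w):
    """TeEFusion-style inference: a single forward pass on the fused embedding."""
    return model(x, fuse_embeddings(e_cond, e_uncond, w))
```

For a model that is linear in its text embedding, the single fused pass reproduces the two-pass CFG output exactly; for real nonlinear diffusion backbones, the paper closes the gap by distilling the student to match the teacher's CFG outputs.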
👥 Authors
Minghao Fu: School of Artificial Intelligence, Nanjing University; National Key Laboratory for Novel Software Technology, Nanjing University; Alibaba International Digital Commerce Group
Guo-Hua Wang: Alibaba (machine learning, deep learning)
Xiaohao Chen: Alibaba International Digital Commerce Group
Qing-Guo Chen: Alibaba (machine learning)
Zhao Xu: Alibaba International Digital Commerce Group
Weihua Luo: Alibaba (natural language processing, machine learning, artificial intelligence)
Kaifu Zhang: Assistant Professor of Marketing, Carnegie Mellon University (two-sided markets, Internet platforms, e-commerce)