DC-Gen: Post-Training Diffusion Acceleration with Deeply Compressed Latent Space

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low inference efficiency of high-resolution (e.g., 4K) text-to-image diffusion models and the underexploited redundancy in their latent spaces, this paper proposes DC-Gen, a post-training acceleration framework that avoids costly retraining from scratch. Methodologically, it is the first to accelerate diffusion models through deep compression of the latent space; it designs a lightweight embedding alignment strategy to bridge the representation gap introduced by compression; and it combines LoRA fine-tuning with NVFP4 SVDQuant quantization for efficient adaptation. Evaluated on SANA and FLUX.1-Krea, DC-Gen-FLUX reduces 4K image-generation latency by 53× on an NVIDIA H100 GPU while preserving generation quality. With quantization added, it achieves end-to-end 4K synthesis in just 3.5 seconds on a single NVIDIA RTX 5090 GPU, an overall speedup of 138× over the base FLUX.1-Krea model.
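The NVFP4 SVDQuant step mentioned above rests on a simple idea: absorb a weight matrix's dominant components into a small high-precision low-rank branch, then quantize the remaining residual to 4 bits. Below is a minimal numpy sketch of that decomposition; the matrix size, the rank `r`, and the per-tensor symmetric scaling are illustrative assumptions, not NVIDIA's actual NVFP4 format or the DC-Gen implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128))  # stand-in for a linear layer's weight
r = 16                               # rank of the high-precision branch (assumed)

# Low-rank branch via truncated SVD absorbs the largest components of W.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
L = (U[:, :r] * S[:r]) @ Vt[:r]

# Quantize the residual to 16 symmetric levels (int4-like, per-tensor scale).
R = W - L
scale = np.abs(R).max() / 7.0
q = np.clip(np.round(R / scale), -8, 7)

# At inference the layer uses the low-rank branch plus the dequantized residual.
W_hat = L + q * scale
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

Because the outlier-heavy directions live in the low-rank branch, the residual has a much smaller dynamic range, which is what makes aggressive 4-bit quantization tolerable.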

📝 Abstract
Existing text-to-image diffusion models excel at generating high-quality images, but face significant efficiency challenges when scaled to high resolutions, like 4K image generation. While previous research accelerates diffusion models in various aspects, it seldom handles the inherent redundancy within the latent space. To bridge this gap, this paper introduces DC-Gen, a general framework that accelerates text-to-image diffusion models by leveraging a deeply compressed latent space. Rather than a costly training-from-scratch approach, DC-Gen uses an efficient post-training pipeline to preserve the quality of the base model. A key challenge in this paradigm is the representation gap between the base model's latent space and a deeply compressed latent space, which can lead to instability during direct fine-tuning. To overcome this, DC-Gen first bridges the representation gap with a lightweight embedding alignment training. Once the latent embeddings are aligned, only a small amount of LoRA fine-tuning is needed to unlock the base model's inherent generation quality. We verify DC-Gen's effectiveness on SANA and FLUX.1-Krea. The resulting DC-Gen-SANA and DC-Gen-FLUX models achieve quality comparable to their base models but with a significant speedup. Specifically, DC-Gen-FLUX reduces the latency of 4K image generation by 53x on the NVIDIA H100 GPU. When combined with NVFP4 SVDQuant, DC-Gen-FLUX generates a 4K image in just 3.5 seconds on a single NVIDIA RTX 5090 GPU, achieving a total latency reduction of 138x compared to the base FLUX.1-Krea model. Code: https://github.com/dc-ai-projects/DC-Gen.
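The embedding alignment step in the abstract can be illustrated with a toy example: before any fine-tuning, fit a lightweight adapter that maps embeddings from the base model's latent space into the deeply compressed one, closing the representation gap. The numpy sketch below uses random data and a plain linear adapter trained by gradient descent; the dimensions, data, and adapter form are all hypothetical stand-ins, not the paper's actual modules.

```python
import numpy as np

rng = np.random.default_rng(0)
d_base, d_comp, n = 64, 16, 512  # assumed embedding dims and sample count

# Stand-ins for embeddings of the same images in the two latent spaces:
# the compressed embedding is a noisy linear function of the base embedding.
z_base = rng.standard_normal((n, d_base))
true_map = rng.standard_normal((d_base, d_comp)) / np.sqrt(d_base)
z_comp = z_base @ true_map + 0.01 * rng.standard_normal((n, d_comp))

# Lightweight alignment: fit adapter W to minimize ||z_base @ W - z_comp||^2.
W = np.zeros((d_base, d_comp))
lr = 0.5
for _ in range(300):
    err = z_base @ W - z_comp        # residual in the compressed space
    grad = z_base.T @ err / n        # gradient of the mean squared error
    W -= lr * grad

mse = float(np.mean((z_base @ W - z_comp) ** 2))  # near the noise floor
```

The point of aligning first is that the subsequent LoRA fine-tuning starts from embeddings the base model already "understands," so only a small correction remains to be learned.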
Problem

Research questions and friction points this paper is trying to address.

High-resolution (e.g., 4K) text-to-image diffusion generation is prohibitively slow
Prior acceleration work leaves the inherent redundancy of the latent space largely unexploited
Adapting a base model to a deeply compressed latent space risks instability and quality loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Accelerates diffusion models by operating in a deeply compressed latent space
Uses an efficient post-training pipeline with lightweight embedding alignment to bridge the representation gap
Unlocks the base model's generation quality with only a small amount of LoRA fine-tuning
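The LoRA step above fine-tunes only a low-rank update on top of each frozen weight matrix, which is why so little adaptation is needed once the embeddings are aligned. A minimal sketch of the parameterization (the layer shapes and rank are illustrative, not taken from DC-Gen):

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r = 256, 256, 8  # assumed layer width and LoRA rank

W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)  # frozen base weight
A = rng.standard_normal((r, d_in)) / np.sqrt(d_in)      # trainable down-projection
B = np.zeros((d_out, r))                                # trainable up-projection, zero-init

# The adapted layer computes x @ (W + B @ A).T; with B = 0 it exactly
# reproduces the base layer, so fine-tuning starts from the base behavior.
x = rng.standard_normal((4, d_in))
y = x @ (W + B @ A).T

full_params = W.size               # 256 * 256 = 65536
lora_params = A.size + B.size      # 8 * (256 + 256) = 4096, ~16x fewer
```

Only `A` and `B` receive gradients, so the adaptation touches a small fraction of the parameters while the frozen base weights preserve the original model's quality.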