DA-VAE: Plug-in Latent Compression for Diffusion via Detail Alignment

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of high-resolution latent diffusion models: reducing token count with high-compression tokenizers tends to destroy the structure of the latent space and typically requires costly retraining of the diffusion model. To overcome this, the authors propose an efficient compression framework that needs no diffusion retraining. By expanding the latent channels of a pretrained VAE to preserve spatial structure and encode high-frequency details, and by combining an explicit detail-alignment mechanism with a lightweight warm-start fine-tuning strategy, the method achieves significant gains in both compression and inference speed. Evaluated on Stable Diffusion 3.5, it enables 1024×1024 image generation from only 32×32 latent tokens (4× fewer than the original model) and further scales to 2048×2048 resolution with a 6× inference speedup while maintaining visual fidelity; the design choices are validated quantitatively on ImageNet.

📝 Abstract
Reducing token count is crucial for efficient training and inference of latent diffusion models, especially at high resolution. A common strategy is to build high-compression image tokenizers with more channels per token. However, when trained only for reconstruction, high-dimensional latent spaces often lose meaningful structure, making diffusion training harder. Existing methods address this with extra objectives such as semantic alignment or selective dropout, but usually require costly diffusion retraining. Pretrained diffusion models, however, already exhibit a structured, lower-dimensional latent space; thus, a simpler idea is to expand the latent dimensionality while preserving this structure. We therefore propose Detail-Aligned VAE, which increases the compression ratio of a pretrained VAE with only lightweight adaptation of the pretrained diffusion backbone. DA-VAE uses an explicit latent layout: the first C channels come directly from the pretrained VAE at a base resolution, while an additional D channels encode higher-resolution details. A simple detail-alignment mechanism encourages the expanded latent space to retain the structure of the original one. With a warm-start fine-tuning strategy, our method enables 1024×1024 image generation with Stable Diffusion 3.5 using only 32×32 tokens, 4× fewer than the original model, within 5 H100-days. It further unlocks 2048×2048 generation with SD3.5, achieving a 6× speedup while preserving image quality. We also validate the method and its design choices quantitatively on ImageNet.
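The explicit latent layout described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the base channel count C = 16 matches the SD3.5 VAE, but the detail channel count D and all tensor names here are our assumptions for illustration.

```python
import numpy as np

# Hypothetical channel counts (C matches the SD3.5 VAE; D is assumed).
C = 16          # channels of the pretrained VAE latent
D = 48          # extra channels encoding higher-resolution details
base_res = 32   # base latent grid -> 32x32 tokens for a 1024x1024 image

# First C channels: the latent of the pretrained VAE at the base resolution.
base_latent = np.random.randn(C, base_res, base_res)
# Additional D channels: detail code on the same spatial grid.
detail_latent = np.random.randn(D, base_res, base_res)

# Explicit layout: concatenate along the channel axis.
z = np.concatenate([base_latent, detail_latent], axis=0)
print(z.shape)  # (64, 32, 32)

# Token count vs. the original SD3.5 setup (64x64 latent grid at 1024px):
tokens_original = 64 * 64
tokens_da_vae = base_res * base_res
print(tokens_original // tokens_da_vae)  # 4x fewer tokens
```

Because the first C channels are kept identical to the pretrained VAE's latent, a diffusion backbone trained on that space only needs lightweight adaptation to consume the expanded latent, which is what enables the warm-start fine-tuning the abstract describes.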
Problem

Research questions and friction points this paper is trying to address.

latent compression
diffusion models
token reduction
high-resolution generation
structured latent space
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent compression
detail alignment
diffusion models
VAE adaptation
high-resolution generation