H3AE: High Compression, High Speed, and High Quality AutoEncoder for Video Diffusion Models

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video diffusion models face a fundamental trade-off among compression ratio, real-time decoding latency, and reconstruction fidelity; moreover, discriminative losses (e.g., GAN or LPIPS) fail to consistently improve performance under large-scale training. To address this, we propose a unified, efficient autoencoder (AE) framework tailored for video diffusion. Our method introduces a lightweight network architecture with an optimized computational distribution across spatial-temporal dimensions and a novel discriminator-free, hyperparameter-free latent-space consistency loss. Notably, it achieves the first real-time decoding of ultra-high-ratio (>100×) video compression on mobile devices, and it unifies the plain AE and image-to-video (I2V) VAE designs within a single architecture. Experiments demonstrate that our AE maintains sub-millisecond decoding latency while improving the compression ratio by 2.3× and reducing LPIPS by 37% over state-of-the-art methods, significantly enhancing both reconstruction quality and inference efficiency. Crucially, it enables high-fidelity, computationally efficient text-to-video generation when integrated with DiT backbones.
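To make the ">100×" figure concrete: a video AE's compression ratio is the element-count ratio between the pixel video and its latent tensor, determined by the spatial and temporal downsampling strides and the latent channel count. The strides and channel numbers below are hypothetical illustrations, not the paper's actual configuration.

```python
def compression_ratio(spatial_stride: int, temporal_stride: int,
                      in_channels: int, latent_channels: int) -> float:
    """Element-count ratio between a pixel video and its latent tensor.

    A (T, H, W, in_channels) video maps to a latent of shape
    (T/temporal_stride, H/spatial_stride, W/spatial_stride, latent_channels),
    so the ratio is (spatial_stride^2 * temporal_stride * in_channels)
    divided by latent_channels.
    """
    return (spatial_stride ** 2 * temporal_stride * in_channels) / latent_channels

# Hypothetical configuration that lands in the >100x regime mentioned above:
# 16x16 spatial and 4x temporal downsampling, RGB input, 16 latent channels.
print(compression_ratio(16, 4, 3, 16))  # -> 192.0
```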

📝 Abstract
Autoencoder (AE) is the key to the success of latent diffusion models for image and video generation, reducing the denoising resolution and improving efficiency. However, the power of AE has long been underexplored in terms of network design, compression ratio, and training strategy. In this work, we systematically examine the architecture design choices and optimize the computation distribution to obtain a series of efficient and high-compression video AEs that can decode in real time on mobile devices. We also unify the design of plain Autoencoder and image-conditioned I2V VAE, achieving multifunctionality in a single network. In addition, we find that the widely adopted discriminative losses, i.e., GAN, LPIPS, and DWT losses, provide no significant improvements when training AEs at scale. We propose a novel latent consistency loss that does not require complicated discriminator design or hyperparameter tuning, but provides stable improvements in reconstruction quality. Our AE achieves an ultra-high compression ratio and real-time decoding speed on mobile while outperforming prior art in terms of reconstruction metrics by a large margin. We finally validate our AE by training a DiT on its latent space and demonstrate fast, high-quality text-to-video generation capability.
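The abstract describes the proposed objective only as a discriminator-free, hyperparameter-free latent consistency loss. As a minimal sketch, assuming "latent consistency" means re-encoding the reconstruction and penalizing its distance to the original latent (the paper's exact formulation may differ), toy linear maps stand in for the real spatio-temporal encoder and decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-ins for the paper's encoder E and decoder D;
# the real model is a spatio-temporal convolutional network.
W_enc = rng.standard_normal((64, 8)) * 0.1   # pixels -> latent
W_dec = rng.standard_normal((8, 64)) * 0.1   # latent -> pixels

def encode(x: np.ndarray) -> np.ndarray:
    return x @ W_enc

def decode(z: np.ndarray) -> np.ndarray:
    return z @ W_dec

def latent_consistency_loss(x: np.ndarray) -> float:
    """L2 distance between the latent of the input and the latent of its
    reconstruction. No discriminator network and no extra loss-weighting
    hyperparameter are involved, matching the abstract's description."""
    z = encode(x)            # original latent
    x_rec = decode(z)        # reconstruction
    z_rec = encode(x_rec)    # re-encoded reconstruction
    return float(np.mean((z_rec - z) ** 2))

x = rng.standard_normal((4, 64))  # a batch of flattened frames
print(latent_consistency_loss(x))
```

In training, this term would be added to the usual pixel-space reconstruction loss; because it is a plain distance in latent space, it avoids the discriminator design and loss-balancing tuning that GAN-style objectives require.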
Problem

Research questions and friction points this paper is trying to address.

Optimize video AutoEncoder design for high compression and speed
Unify plain AutoEncoder and image-conditioned VAE into one network
Improve reconstruction quality with novel latent consistency loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient high-compression video AutoEncoder design
Unified plain and image-conditioned VAE network
Novel latent consistency loss for quality