🤖 AI Summary
To address the inefficiency and parameter redundancy of encoders in latent diffusion models (LDMs), this paper introduces LiteVAE, a family of lightweight autoencoders that leverage the two-dimensional discrete wavelet transform (DWT) within the VAE architecture. LiteVAE combines DWT-based multi-scale representations with improved training methodology and an enhanced decoder design. Its base model matches the reconstruction quality of the established VAEs used in current LDMs while cutting encoder parameters six-fold, which speeds up training and lowers GPU memory requirements; its larger model outperforms VAEs of comparable complexity across all evaluated metrics (rFID, LPIPS, PSNR, and SSIM). The core contribution is showing that a wavelet-based encoder can deliver high-fidelity reconstruction at a fraction of the usual computational cost, pointing toward more efficient autoencoders for high-resolution image generation.
📝 Abstract
Advances in latent diffusion models (LDMs) have revolutionized high-resolution image generation, but the design space of the autoencoder that is central to these systems remains underexplored. In this paper, we introduce LiteVAE, a family of autoencoders for LDMs that leverage the 2D discrete wavelet transform to enhance scalability and computational efficiency over standard variational autoencoders (VAEs) with no sacrifice in output quality. We also investigate the training methodologies and the decoder architecture of LiteVAE and propose several enhancements that improve the training dynamics and reconstruction quality. Our base LiteVAE model matches the quality of the established VAEs in current LDMs with a six-fold reduction in encoder parameters, leading to faster training and lower GPU memory requirements, while our larger model outperforms VAEs of comparable complexity across all evaluated metrics (rFID, LPIPS, PSNR, and SSIM).
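To make the core idea concrete, the sketch below shows one level of a 2D DWT using the Haar wavelet: an image is split into four subbands, each at half the spatial resolution. This is an illustrative NumPy sketch of the transform the abstract refers to, not LiteVAE's actual implementation (the choice of the Haar kernel and a single decomposition level here are assumptions for clarity).

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2D Haar discrete wavelet transform.

    Splits an (H, W) image into four (H/2, W/2) subbands:
    LL (coarse approximation) and LH/HL/HH (detail coefficients).
    Assumes H and W are even.
    """
    # Group each 2x2 block of pixels.
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    # Orthonormal Haar combination: sums and differences scaled by 1/2,
    # so the transform preserves total signal energy.
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

img = np.random.rand(256, 256)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (128, 128): each subband has half the spatial resolution
```

Because the transform is fixed and cheap, it performs spatial downsampling without any learned parameters; the efficiency argument in the abstract rests on feeding such multi-scale subbands to a much smaller learned encoder.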