WF-VAE: Enhancing Video VAE by Wavelet-Driven Energy Flow for Latent Video Diffusion Model

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 13
Influential: 3
🤖 AI Summary
To address the high computational overhead of video VAE encoding and the latent-space discontinuities induced by block-wise inference on high-resolution, long-duration videos, this paper proposes Wavelet Flow VAE (WF-VAE), a wavelet-driven energy-flow video VAE. The method combines multi-level discrete wavelet transform (DWT) with variational autoencoding. Its core contributions are: (1) a multi-level wavelet decomposition scheme that channels low-frequency energy into a compact latent representation, enabling efficient encoding; and (2) Causal Cache, a caching mechanism that preserves the temporal consistency and integrity of the latent space during block-wise inference. Experiments show that WF-VAE outperforms state-of-the-art video VAEs on both PSNR and LPIPS, achieving 2x higher throughput and 4x lower GPU memory consumption while maintaining competitive reconstruction quality.
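The "low-frequency energy flow" intuition can be illustrated with a toy single-level 3-D Haar DWT: for a smooth video, almost all signal energy lands in the low-frequency (LLL) subband, which is why routing that subband into the latent space is an efficient compression target. This is a minimal NumPy sketch, not the paper's implementation; the Haar filters, subband naming, and synthetic video are illustrative assumptions.

```python
import numpy as np

def haar_dwt_1d(x, axis):
    """Single-level Haar DWT along one axis: returns (low, high) subbands."""
    n = x.shape[axis] // 2 * 2  # truncate to even length
    x = np.take(x, range(n), axis=axis)
    even = np.take(x, range(0, n, 2), axis=axis)
    odd = np.take(x, range(1, n, 2), axis=axis)
    low = (even + odd) / np.sqrt(2)   # approximation (low-frequency)
    high = (even - odd) / np.sqrt(2)  # detail (high-frequency)
    return low, high

def haar_dwt_3d(video):
    """One DWT level over (T, H, W): returns 8 subbands keyed 'LLL'..'HHH'."""
    subbands = {"": video}
    for axis in (0, 1, 2):
        nxt = {}
        for key, band in subbands.items():
            lo, hi = haar_dwt_1d(band, axis)
            nxt[key + "L"] = lo
            nxt[key + "H"] = hi
        subbands = nxt
    return subbands

rng = np.random.default_rng(0)
# A smooth synthetic "video" of shape (T, H, W): mostly low-frequency content.
t, h, w = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 32),
                      np.linspace(0, 1, 32), indexing="ij")
video = np.sin(2 * np.pi * t) + np.cos(2 * np.pi * h) \
        + 0.05 * rng.standard_normal(t.shape)

bands = haar_dwt_3d(video)
total = sum(np.sum(b ** 2) for b in bands.values())
print(f"LLL energy share: {np.sum(bands['LLL'] ** 2) / total:.3f}")
```

Because the Haar transform is orthonormal, total energy is preserved across the eight subbands; the printed share shows how strongly it concentrates in LLL, and recursing on LLL gives the multi-level decomposition.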

📝 Abstract
Video Variational Autoencoder (VAE) encodes videos into a low-dimensional latent space and has become a key component of most Latent Video Diffusion Models (LVDMs), reducing model training costs. However, as the resolution and duration of generated videos increase, the encoding cost of video VAEs becomes a limiting bottleneck in training LVDMs. Moreover, the block-wise inference method adopted by most LVDMs can lead to discontinuities in the latent space when processing long-duration videos. The key to addressing the computational bottleneck lies in decomposing videos into distinct components and efficiently encoding the critical information. Since wavelet transform can decompose videos into multiple frequency-domain components and improve efficiency significantly, we propose Wavelet Flow VAE (WF-VAE), an autoencoder that leverages multi-level wavelet transform to facilitate low-frequency energy flow into the latent representation. Furthermore, we introduce a method called Causal Cache, which maintains the integrity of the latent space during block-wise inference. Compared to state-of-the-art video VAEs, WF-VAE demonstrates superior performance in both PSNR and LPIPS metrics, achieving 2x higher throughput and 4x lower memory consumption while maintaining competitive reconstruction quality. Our code and models are available at https://github.com/PKU-YuanGroup/WF-VAE.
Problem

Research questions and friction points this paper is trying to address.

Reducing Video VAE encoding costs for high-resolution, long-duration videos
Addressing latent space discontinuities in block-wise video inference
Improving efficiency via wavelet-driven energy flow in video encoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wavelet transform decomposes videos efficiently
Causal Cache maintains latent space integrity
WF-VAE enhances throughput and reduces memory
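The Causal Cache idea, carrying the trailing receptive-field frames of one chunk into the next so block-wise outputs exactly match full-sequence outputs, can be sketched with a 1-D causal temporal convolution. This is a simplified stand-in for the paper's causal convolutions over video features; the `CausalCache` class, kernel, and chunk sizes are illustrative assumptions, not the paper's code.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Causal convolution: output[t] depends only on x[t-k+1 .. t]."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])  # zero left-padding
    return np.array([padded[t:t + k] @ kernel for t in range(len(x))])

class CausalCache:
    """Keeps the last (k-1) inputs of the previous chunk so block-wise
    inference is seamless: no zero re-padding at chunk boundaries."""
    def __init__(self, kernel):
        self.kernel = np.asarray(kernel)
        self.cache = np.zeros(len(kernel) - 1)  # initial left padding

    def __call__(self, chunk):
        k = len(self.kernel)
        padded = np.concatenate([self.cache, chunk])
        out = np.array([padded[t:t + k] @ self.kernel
                        for t in range(len(chunk))])
        self.cache = padded[len(padded) - (k - 1):]  # tail for next chunk
        return out

rng = np.random.default_rng(1)
x = rng.standard_normal(16)          # a 16-frame temporal signal
kernel = np.array([0.2, 0.3, 0.5])

full = causal_conv1d(x, kernel)      # one pass over the whole sequence
cached = CausalCache(kernel)
blockwise = np.concatenate([cached(x[:6]), cached(x[6:11]), cached(x[11:])])
print(np.allclose(full, blockwise))  # → True
```

Without the cache, each chunk would be zero-padded independently, producing boundary artifacts; here the cached tail makes the three chunked passes bit-identical to the single full pass.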
👥 Authors
Zongjian Li
Peking University, Rabbitpre Intelligence
Bin Lin
Peking University, Rabbitpre Intelligence
Yang Ye
Peking University, Rabbitpre Intelligence
Liuhan Chen
Peking University
Image and Video Generation · Image and Video Processing
Xinhua Cheng
Peking University
Computer Vision
Shenghai Yuan
Peking University, Rabbitpre Intelligence
Li Yuan
Peking University