Generative Latent Video Compression

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the dual challenge of rate-distortion trade-off optimization and inter-frame flicker artifacts in perceptual video compression, this paper proposes a generative latent representation compression framework. Methodologically: (1) a pre-trained continuous tokenizer maps video frames into a perceptually aligned latent space, decoupling perceptual constraints; (2) a unified intra-/inter-frame coding architecture with a recurrent memory mechanism is introduced to enhance temporal consistency; (3) spatiotemporal context modeling is integrated with an improved latent-domain encoder-decoder design. Evaluated on multiple benchmarks, the method achieves state-of-the-art performance in DISTS and LPIPS metrics at significantly lower bitrates. A user study confirms that it attains comparable visual quality to current top-performing neural video codecs at nearly half the bitrate, demonstrating substantial improvements in both perceptual coherence and compression efficiency.

📝 Abstract
Perceptual optimization is widely recognized as essential for neural compression, yet balancing the rate-distortion-perception tradeoff remains challenging. This difficulty is especially pronounced in video compression, where frame-wise quality fluctuations often cause perceptually optimized neural video codecs to suffer from flickering artifacts. In this paper, inspired by the success of latent generative models, we present Generative Latent Video Compression (GLVC), an effective framework for perceptual video compression. GLVC employs a pretrained continuous tokenizer to project video frames into a perceptually aligned latent space, thereby offloading perceptual constraints from the rate-distortion optimization. We redesign the codec architecture explicitly for the latent domain, drawing on extensive insights from prior neural video codecs, and further equip it with innovations such as unified intra/inter coding and a recurrent memory mechanism. Experimental results across multiple benchmarks show that GLVC achieves state-of-the-art performance in terms of DISTS and LPIPS metrics. Notably, our user study confirms GLVC rivals the latest neural video codecs at nearly half their rate while maintaining stable temporal coherence, marking a step toward practical perceptual video compression.
Problem

Research questions and friction points this paper is trying to address.

Balancing rate-distortion-perception tradeoff in video compression
Reducing flickering artifacts in neural video codecs
Achieving temporal coherence while maintaining perceptual quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses pretrained tokenizer for perceptual latent space
Implements unified intra/inter coding architecture
Incorporates recurrent memory for temporal coherence
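The innovations above can be illustrated with a minimal toy sketch. This is not the paper's implementation: the pretrained tokenizer is replaced by a stand-in random projection, and the unified intra/inter coding with a recurrent memory is reduced to residual coding against an exponentially updated memory state. All names and parameters here are hypothetical.

```python
import numpy as np

def tokenize(frame, proj):
    """Stand-in for a pretrained continuous tokenizer: flatten a frame
    and project it into a lower-dimensional latent vector."""
    return frame.reshape(-1) @ proj

def code_sequence(frames, proj, alpha=0.9):
    """Unified intra/inter coding sketch: the first latent is coded
    as-is (intra); later latents are coded as residuals against a
    recurrent memory (inter), updated after every frame."""
    memory = None
    residuals = []
    for frame in frames:
        z = tokenize(frame, proj)
        if memory is None:
            residuals.append(z)            # intra: no temporal prediction
            memory = z
        else:
            residuals.append(z - memory)   # inter: predict from memory
            memory = alpha * memory + (1 - alpha) * z
    return residuals

def decode_sequence(residuals, alpha=0.9):
    """Mirror the coding loop to recover the latent sequence, keeping
    the same recurrent memory update so encoder and decoder stay in sync."""
    memory = None
    latents = []
    for r in residuals:
        if memory is None:
            z = r
            memory = z
        else:
            z = r + memory
            memory = alpha * memory + (1 - alpha) * z
        latents.append(z)
    return latents
```

Because the decoder repeats the encoder's memory update exactly, the round trip recovers each latent; in a real codec the residuals would additionally be quantized and entropy-coded, and a latent decoder would map the latents back to pixels.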
Authors

Zongyu Guo, Microsoft Research (Neural Compression, Generative Modeling, Probabilistic Models)
Zhaoyang Jia, University of Science and Technology of China (Video Compression, Digital Watermarking)
Jiahao Li, Microsoft Research Asia
Xiaoyi Zhang, Microsoft Research Asia
Bin Li, Microsoft Research Asia
Yan Lu, Microsoft Research Asia