Generative Latent Coding for Ultra-Low Bitrate Image Compression

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of balancing perceptual quality and reconstruction fidelity in ultra-low-bitrate image compression (<0.04 bpp), where conventional pixel-domain transform coding falls short, this paper proposes a generative latent-space image compression framework. It builds a sparse, semantically rich, perception-aligned latent representation with a generative VQ-VAE and performs transform coding directly in that latent space. A categorical hyper module significantly reduces the bitrate overhead of hyper-information, and a code-prediction-based supervision enhances semantic consistency. On the CLIC2020 test set, the method matches the FID of MS-ILLM with 45% fewer bits; for face images it maintains high visual quality below 0.01 bpp. The framework also supports downstream applications such as image restoration and style transfer.
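The core of the latent-space approach is vector quantization: the encoder's continuous latents are snapped to the nearest entry of a learned codebook, and only the integer code indices need to be transmitted. A minimal sketch of that lookup, with a toy random codebook standing in for the trained VQ-VAE codebook (names and shapes here are illustrative, not the paper's implementation):

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry (L2 distance).

    latents:  (N, D) array of encoder outputs
    codebook: (K, D) array of learned code vectors
    Returns the integer code indices (what would be entropy-coded)
    and the quantized latents fed to the generative decoder.
    """
    # Squared L2 distance between every latent and every code vector.
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)           # (N,) integer code indices
    return idx, codebook[idx]        # quantized representation

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))  # toy 16-entry codebook
latents = rng.normal(size=(8, 4))
idx, quantized = vector_quantize(latents, codebook)
```

Because the decoder only ever sees codebook rows, the bitstream reduces to the sequence of indices, which is what makes the representation sparse enough for sub-0.04 bpp operation.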

📝 Abstract
Most existing image compression approaches perform transform coding in the pixel space to reduce spatial redundancy. However, they encounter difficulties in achieving both high realism and high fidelity at low bitrates, as pixel-space distortion may not align with human perception. To address this issue, we introduce a Generative Latent Coding (GLC) architecture, which performs transform coding in the latent space of a generative vector-quantized variational auto-encoder (VQ-VAE) instead of in the pixel space. The generative latent space is characterized by greater sparsity, richer semantics, and better alignment with human perception, rendering it advantageous for high-realism, high-fidelity compression. Additionally, we introduce a categorical hyper module to reduce the bit cost of hyper-information, and a code-prediction-based supervision to enhance semantic consistency. Experiments demonstrate that our GLC maintains high visual quality with less than 0.04 bpp on natural images and less than 0.01 bpp on facial images. On the CLIC2020 test set, we achieve the same FID as MS-ILLM with 45% fewer bits. Furthermore, the powerful generative latent space enables various applications built on our GLC pipeline, such as image restoration and style transfer. The code is available at https://github.com/jzyustc/GLC.
Problem

Research questions and friction points this paper is trying to address.

Achieving both high realism and high fidelity at ultra-low bitrates (<0.04 bpp)
Pixel-space distortion does not align with human perception, limiting pixel-domain transform coding
Reducing the bit cost of hyper-information while preserving semantic consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transform coding in generative latent space instead of pixel space
Categorical hyper module reduces bit cost of hyper-information
Code-prediction supervision enhances semantic consistency in compression
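A way to see why the categorical hyper module saves bits: the rate of entropy-coding the code indices is the Shannon cost under the predicted categorical prior, so a sharper (more accurate) prior directly lowers the bitrate. The sketch below illustrates that relationship with hypothetical per-position probabilities; it is not the paper's entropy model:

```python
import numpy as np

def estimated_bits(indices, probs):
    """Shannon bit cost of entropy-coding code indices under a
    categorical prior (the kind of distribution a hyper module predicts).

    indices: (N,) integer code indices
    probs:   (N, K) predicted probability of each codebook entry
    """
    p = probs[np.arange(len(indices)), indices]
    return float(-np.log2(p).sum())

idx = np.array([2, 2, 5])
uniform = np.full((3, 8), 1 / 8)     # flat prior: log2(8) = 3 bits/symbol
sharp = np.full((3, 8), 0.02)        # confident prior on the true codes
sharp[np.arange(3), idx] = 0.86      # each row still sums to 1
assert estimated_bits(idx, sharp) < estimated_bits(idx, uniform)
```

Under the flat prior the three symbols cost 9 bits; the confident prior costs well under 1 bit per symbol, which is the mechanism by which better hyper-information reduces rate.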