Product-Quantised Image Representation for High-Quality Image Synthesis

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low reconstruction fidelity and inefficiency of latent representations in high-fidelity image generation, this paper proposes PQGAN, the first generative framework to integrate product quantization (PQ) into the latent encoding of the VQGAN architecture. PQGAN jointly leverages subspace decomposition and codebook optimization to enable efficient, high-fidelity quantization of high-dimensional latent spaces. The authors empirically identify an inverse relationship between embedding dimensionality and the relative performance of vector quantization versus product quantization, establishing principled guidelines for hyperparameter selection. Experiments on ImageNet show that PQGAN achieves 37 dB PSNR, a 10 dB improvement over the VQGAN baseline, while reducing FID, LPIPS, and CMMD by up to 96%. Moreover, PQGAN supports either doubling the output resolution or accelerating generation, and integrates seamlessly with pre-trained diffusion models.

📝 Abstract
Product quantisation (PQ) is a classical method for scalable vector encoding, yet it has seen limited usage for latent representations in high-fidelity image generation. In this work, we introduce PQGAN, a quantised image autoencoder that integrates PQ into the well-known vector quantisation (VQ) framework of VQGAN. PQGAN achieves a noticeable improvement over state-of-the-art methods in terms of reconstruction performance, including both quantisation methods and their continuous counterparts. We achieve a PSNR score of 37dB, where prior work achieves 27dB, and are able to reduce the FID, LPIPS, and CMMD scores by up to 96%. Our key to success is a thorough analysis of the interaction between codebook size, embedding dimensionality, and subspace factorisation, with vector and scalar quantisation as special cases. We obtain novel findings, for example that the performance of VQ and PQ behaves in opposite ways when scaling the embedding dimension. Furthermore, our analysis reveals performance trends for PQ that help guide optimal hyperparameter selection. Finally, we demonstrate that PQGAN can be seamlessly integrated into pre-trained diffusion models. This enables either significantly faster and more compute-efficient generation, or a doubling of the output resolution at no additional cost, positioning PQ as a strong extension for discrete latent representation in image synthesis.
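The core mechanism of product quantisation is to split each D-dimensional latent vector into M subvectors and quantise each one against its own small codebook, so M codebooks of K entries represent K^M effective codes. The sketch below is a minimal NumPy illustration of this idea, not the paper's PQGAN implementation; the function and variable names are our own:

```python
import numpy as np

def product_quantise(z, codebooks):
    """Quantise latents (N, D) with product quantisation.

    codebooks: list of M arrays, each of shape (K, D/M). Each D/M-dim
    subvector is snapped to its nearest sub-codebook entry, so every
    D-dim vector is encoded as M indices.
    """
    M = len(codebooks)
    sub = np.split(z, M, axis=1)                 # M subvectors of shape (N, D/M)
    indices, recon = [], []
    for s, cb in zip(sub, codebooks):
        # squared Euclidean distance of each subvector to each code: (N, K)
        d = ((s[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(axis=1)
        indices.append(idx)
        recon.append(cb[idx])                    # nearest codes, shape (N, D/M)
    return np.stack(indices, axis=1), np.concatenate(recon, axis=1)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))                      # four 8-dim latents
codebooks = [rng.normal(size=(16, 4)) for _ in range(2)]   # M=2, K=16
idx, z_q = product_quantise(z, codebooks)        # idx: (4, 2), z_q: (4, 8)
```

Note that the special cases mentioned in the abstract fall out directly: M=1 recovers plain vector quantisation, while M=D (one codebook per scalar coordinate) recovers scalar quantisation.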
Problem

Research questions and friction points this paper is trying to address.

Enhancing image reconstruction quality using product quantization techniques
Optimizing quantized autoencoder performance through hyperparameter interaction analysis
Enabling higher-resolution image synthesis with improved computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates product quantization into VQGAN framework
Achieves 37dB PSNR with 96% metric improvements
Enables faster generation or doubled resolution
Denis Zavadski
Heidelberg University ELIZA
Nikita Philip Tatsch
Heidelberg University ELIZA
Carsten Rother
Professor Uni Heidelberg / Germany
computer vision, machine learning