🤖 AI Summary
To address the low reconstruction fidelity and inefficiency of latent representations in high-fidelity image generation, this paper proposes PQGAN, the first generative framework to integrate product quantisation (PQ) into latent-variable encoding within the VQGAN architecture. PQGAN jointly leverages subspace decomposition and codebook optimisation to enable efficient, high-fidelity quantisation of high-dimensional latent spaces. The authors empirically identify an inverse relationship between embedding dimensionality and the relative performance of vector quantisation versus product quantisation, thereby establishing principled guidelines for hyperparameter selection. Experiments on ImageNet demonstrate that PQGAN achieves 37 dB PSNR, a 10 dB improvement over the VQGAN baseline, while reducing FID, LPIPS, and CMMD by up to 96%. Moreover, PQGAN supports either doubling the output resolution or accelerating generation, and integrates seamlessly with diffusion models.
📝 Abstract
Product quantisation (PQ) is a classical method for scalable vector encoding, yet it has seen limited use for latent representations in high-fidelity image generation. In this work, we introduce PQGAN, a quantised image autoencoder that integrates PQ into the well-known vector quantisation (VQ) framework of VQGAN. PQGAN achieves a notable improvement in reconstruction performance over state-of-the-art methods, both quantised approaches and their continuous counterparts. We achieve a PSNR of 37 dB, compared with 27 dB for prior work, and reduce the FID, LPIPS, and CMMD scores by up to 96%. Key to our success is a thorough analysis of the interaction between codebook size, embedding dimensionality, and subspace factorisation, with vector and scalar quantisation as special cases. We obtain novel findings, such as that the performance of VQ and PQ scales in opposite directions with embedding dimension. Furthermore, our analysis reveals performance trends for PQ that help guide optimal hyperparameter selection. Finally, we demonstrate that PQGAN can be seamlessly integrated into pre-trained diffusion models. This enables either significantly faster and more compute-efficient generation, or a doubling of the output resolution at no additional cost, positioning PQ as a strong extension for discrete latent representations in image synthesis.
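To make the core idea concrete, here is a minimal sketch of product quantisation as the abstract describes it: the latent vector is split into subspaces, and each subspace is quantised against its own codebook, so M codebooks of K entries represent K^M combinations instead of the K codewords of plain VQ. All sizes below are illustrative, not the paper's, and the codebooks are random rather than learned (in PQGAN they would be trained jointly with the autoencoder).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy configuration (illustrative only): 64-dim latents split into
# M = 4 subspaces of d = 16 dims, each with its own K = 256 codebook.
D, M, K = 64, 4, 256
d = D // M
codebooks = rng.standard_normal((M, K, d))  # one codebook per subspace

def pq_encode(x: np.ndarray) -> np.ndarray:
    """Quantise a (D,) vector to M codebook indices, one per subspace."""
    idx = np.empty(M, dtype=np.int64)
    for m in range(M):
        sub = x[m * d:(m + 1) * d]
        # nearest centroid within this subspace's codebook
        dists = np.linalg.norm(codebooks[m] - sub, axis=1)
        idx[m] = np.argmin(dists)
    return idx

def pq_decode(idx: np.ndarray) -> np.ndarray:
    """Reconstruct a (D,) vector by concatenating the selected centroids."""
    return np.concatenate([codebooks[m, idx[m]] for m in range(M)])

x = rng.standard_normal(D)
codes = pq_encode(x)      # M discrete symbols, M * log2(K) bits total
x_hat = pq_decode(codes)  # piecewise-nearest reconstruction of x
```

Note the capacity argument: with these toy sizes, the effective codebook covers 256^4 ≈ 4.3 billion distinct latent vectors while each codeword search only scans 4 × 256 centroids, which is what makes PQ attractive for high-dimensional latent spaces.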