🤖 AI Summary
Existing vector quantization (VQ) methods suffer from non-smooth latent spaces, weak alignment between continuous and discrete representations, and suboptimal codebook utilization—leading to unstable codebook learning and degraded reconstruction and generation performance. To address these issues, we propose VAEVQ, a framework that deeply integrates variational autoencoders (VAEs) with vector quantization. Our approach introduces three key innovations: (i) a variational latent quantization mechanism that regularizes the quantized latent distribution; (ii) adaptive feature alignment modulation to enforce geometric and statistical consistency between continuous and discrete representations; and (iii) distributional consistency regularization to improve codebook utilization and stability. Experiments on two standard benchmarks demonstrate that VAEVQ achieves superior image reconstruction fidelity and outperforms state-of-the-art methods on downstream generative tasks, validating its effectiveness in enhancing latent space smoothness, representation alignment, and codebook efficiency.
📝 Abstract
Vector quantization (VQ) transforms continuous image features into discrete representations, providing compressed, tokenized inputs for generative models. However, VQ-based frameworks suffer from several issues: non-smooth latent spaces, weak alignment between representations before and after quantization, and poor coherence between the continuous and discrete domains. These issues lead to unstable codeword learning and underutilized codebooks, ultimately degrading performance on both reconstruction and downstream generation tasks. To address these issues, we propose VAEVQ, which comprises three key components: (1) Variational Latent Quantization (VLQ), which replaces the autoencoder (AE) with a variational autoencoder (VAE) for quantization, leveraging the VAE's structured and smooth latent space to facilitate more effective codeword activation; (2) Representation Coherence Strategy (RCS), which adaptively modulates the alignment strength between pre- and post-quantization features to enhance consistency and prevent overfitting to noise; and (3) Distribution Consistency Regularization (DCR), which aligns the entire codebook distribution with the continuous latent distribution to improve codebook utilization. Extensive experiments on two benchmark datasets demonstrate that VAEVQ outperforms state-of-the-art methods.
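To make the VLQ idea concrete, here is a minimal NumPy sketch of the forward pass it implies: the encoder outputs a Gaussian posterior (mean and log-variance), a latent is drawn via the standard VAE reparameterization, and that latent is then quantized to the nearest codeword. All names, shapes, and values are illustrative assumptions, not the paper's implementation; the training losses (KL term, RCS alignment, DCR regularization) and the straight-through gradient are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    # VAE reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def quantize(z, codebook):
    # Nearest-codeword lookup under squared L2 distance;
    # returns the selected indices and the quantized vectors
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)
    return idx, codebook[idx]

# Toy example: 4 latent vectors of dim 8, a codebook of 16 entries
mu = rng.standard_normal((4, 8))          # hypothetical encoder means
logvar = np.full((4, 8), -2.0)            # small fixed variance for the sketch
codebook = rng.standard_normal((16, 8))   # learnable codebook (random here)

z = reparameterize(mu, logvar, rng)       # sample from the variational posterior
idx, z_q = quantize(z, codebook)          # discretize into codewords
print(idx.shape, z_q.shape)               # (4,) (4, 8)
```

Sampling from a smooth posterior (rather than using a deterministic AE feature) means nearby inputs map to overlapping latent regions, which is the property the abstract credits with activating more codewords.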