🤖 AI Summary
Existing vector quantization methods suffer from premature discretization, which hinders the encoder's ability to fully learn the underlying data manifold and limits representational capacity. To address this, this work proposes Progressive Vector Quantization (ProVQ), which, for the first time, formulates the quantization process as a learnable curriculum schedule. By employing a smooth annealing strategy that transitions gradually from continuous to discrete representations, ProVQ dynamically modulates quantization difficulty and guides the codebook toward progressively covering the data manifold. This improves alignment between the codebook and the data distribution, yielding better reconstruction and generative performance on ImageNet-1K and ImageNet-100, and state-of-the-art results on the StrutTokenBench protein structure tokenization benchmark.
📄 Abstract
Vector Quantization (VQ) has become the cornerstone of tokenization for many multimodal Large Language Models and diffusion-based synthesis. However, existing VQ paradigms suffer from a fundamental conflict: they enforce discretization before the encoder has captured the underlying data manifold. We term this phenomenon Premature Discretization. To resolve it, we propose Progressive Vector Quantization (ProVQ), which treats the dynamics of quantization hardness as a fundamental yet previously overlooked axis of VQ training. By casting quantization as a curriculum that smoothly anneals from a continuous latent space to a discrete one, ProVQ guides the codebook toward a well-expanded manifold. Extensive experiments demonstrate the broad effectiveness of ProVQ across diverse modalities. We report improved reconstruction and generative performance on the ImageNet-1K and ImageNet-100 benchmarks, highlighting ProVQ's benefit to generative modeling. Furthermore, ProVQ proves highly effective for modeling complex biological sequences, establishing a new performance ceiling for protein structure tokenization on the StrutTokenBench leaderboard.
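The paper does not spell out its annealing rule here, but the core idea of a curriculum that smoothly moves from continuous to discrete latents can be sketched with one common mechanism: interpolating between the encoder output and its nearest-neighbour code, with the interpolation weight ramped from 0 to 1 over training. The function names, the linear interpolation, and the cosine schedule below are all illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def provq_quantize(z, codebook, alpha):
    """Soft-to-hard quantization sketch (illustrative, not the paper's exact rule).

    z:        (N, D) encoder outputs
    codebook: (K, D) code vectors
    alpha:    curriculum weight in [0, 1]; 0 = fully continuous latents,
              1 = fully discrete (standard nearest-neighbour VQ).
    """
    # Squared distances from each latent to every code vector: shape (N, K)
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    hard = codebook[d2.argmin(axis=1)]  # hard nearest-neighbour assignment
    # Blend the continuous latent with its hard code; gradually hardens as alpha -> 1
    return (1.0 - alpha) * z + alpha * hard

def alpha_schedule(step, total_steps):
    """Hypothetical smooth cosine ramp from 0 to 1 over training."""
    t = min(step / total_steps, 1.0)
    return 0.5 * (1.0 - np.cos(np.pi * t))
```

At `alpha = 0` the model trains as a plain autoencoder, so the encoder can shape the manifold before any discretization pressure is applied; at `alpha = 1` it reduces to ordinary VQ. Any smooth monotone schedule would serve the same role as the cosine ramp used here.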