Scalable Training for Vector-Quantized Networks with 100% Codebook Utilization

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vector quantization (VQ) training suffers from poor reconstruction quality and low codebook utilization, caused by biased straight-through estimator (STE) gradients, sparse codebook gradients, and one-step-behind codebook updates. To address these issues, this paper proposes VQBridge, a robust, scalable, end-to-end trainable projector built on a map-function method. VQBridge optimizes code vectors through a compress-process-recover pipeline, enabling stable and effective codebook training, and is combined with learning annealing to jointly optimize quantization and reconstruction; the resulting framework is called FVQ (FullVQ). Notably, FVQ achieves 100% codebook utilization, scales to ultra-large codebooks (up to 262k entries), and remains effective across diverse VQ variants. When integrated into LlamaGen, it attains state-of-the-art reconstruction quality, surpassing VAR by 0.5 and DiT by 0.2 rFID, and performance improves consistently with larger codebooks.

📝 Abstract
Vector quantization (VQ) is a key component in discrete tokenizers for image generation, but its training is often unstable due to straight-through estimation bias, one-step-behind updates, and sparse codebook gradients, which lead to suboptimal reconstruction performance and low codebook usage. In this work, we analyze these fundamental challenges and provide a simple yet effective solution. To maintain high codebook usage in VQ networks (VQN) during learning annealing and codebook size expansion, we propose VQBridge, a robust, scalable, and efficient projector based on the map function method. VQBridge optimizes code vectors through a compress-process-recover pipeline, enabling stable and effective codebook training. By combining VQBridge with learning annealing, our VQN achieves full (100%) codebook usage across diverse codebook configurations, which we refer to as FVQ (FullVQ). Through extensive experiments, we demonstrate that FVQ is effective, scalable, and generalizable: it attains 100% codebook usage even with a 262k-codebook, achieves state-of-the-art reconstruction performance, consistently improves with larger codebooks, higher vector channels, or longer training, and remains effective across different VQ variants. Moreover, when integrated with LlamaGen, FVQ significantly enhances image generation performance, surpassing visual autoregressive models (VAR) by 0.5 and diffusion models (DiT) by 0.2 rFID, highlighting the importance of high-quality tokenizers for strong autoregressive image generation.
Problem

Research questions and friction points this paper is trying to address.

Addresses unstable vector quantization training with straight-through estimation bias
Solves low codebook usage and suboptimal reconstruction in VQ networks
Enables scalable training for 100% codebook utilization across configurations
Innovation

Methods, ideas, or system contributions that make the work stand out.

VQBridge projector enables stable codebook training
Compress-process-recover pipeline optimizes code vectors
Achieves 100% codebook usage across configurations
Yifan Chang
CASIA, UCAS, Luoyang Institute for Robot and Intelligent Equipment
Jie Qin
Professor, Nanjing University of Aeronautics and Astronautics
Computer Vision, Machine Learning, Pattern Recognition
Limeng Qiao
Meituan Inc.
Computer Vision
Xiaofeng Wang
GigaAI
Zheng Zhu
GigaAI
Lin Ma
Meituan
Xingang Wang
CASIA