Quantize-then-Rectify: Efficient VQ-VAE Training

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the prohibitively high computational cost of training high-compression-rate VQ-VAEs—often requiring thousands of GPU hours—this paper proposes Quantize-then-Rectify (ReVQ), a novel framework. ReVQ first leverages a pretrained VAE to extract continuous latent representations, then applies grouped channel quantization to substantially expand codebook capacity, and finally employs a lightweight differentiable rectifier module to suppress quantization errors. This design decouples quantization from reconstruction and replaces end-to-end joint optimization with differentiable post-quantization correction, achieving high-fidelity reconstruction at minimal overhead: on ImageNet, it attains an rFID of 1.06 at ≤512 tokens per image, while training completes in only ~22 hours on a single NVIDIA RTX 4090—two orders of magnitude faster than state-of-the-art methods. The core contribution lies in this efficiency-fidelity trade-off, enabled by modular, differentiable error correction without sacrificing reconstruction quality.
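The grouped channel quantization step described above can be sketched as follows. This is a hypothetical NumPy illustration, not the authors' implementation: the latent's channels are split into G groups, each quantized against its own K-entry codebook, so the effective codebook capacity grows combinatorially to K^G.

```python
import numpy as np

def group_quantize(z, codebooks):
    """Quantize latents group-by-group.

    z: (B, C) continuous latents from a pretrained VAE (stand-in here).
    codebooks: list of G arrays, each (K, C // G) -- one codebook per channel group.
    Returns quantized latents (B, C) and code indices (B, G).
    """
    groups = np.split(z, len(codebooks), axis=1)  # G chunks of C // G channels
    quantized, indices = [], []
    for g, cb in zip(groups, codebooks):
        # squared Euclidean distance from each latent chunk to each code
        d = ((g[:, None, :] - cb[None, :, :]) ** 2).sum(axis=-1)  # (B, K)
        idx = d.argmin(axis=1)                                    # nearest code
        quantized.append(cb[idx])
        indices.append(idx)
    return np.concatenate(quantized, axis=1), np.stack(indices, axis=1)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16))                             # B=4 latents, C=16 channels
codebooks = [rng.normal(size=(8, 4)) for _ in range(4)]  # G=4 groups, K=8 codes each
z_q, ids = group_quantize(z, codebooks)
print(z_q.shape, ids.shape)  # (4, 16) (4, 4)
```

With G groups of K codes, each image region is described by G indices drawn from independent codebooks, which is how a modest per-group codebook can yield the large effective capacity the summary mentions.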

📝 Abstract
Visual tokenizers are pivotal in multimodal large models, acting as bridges between continuous inputs and discrete tokens. Nevertheless, training high-compression-rate VQ-VAEs remains computationally demanding, often necessitating thousands of GPU hours. This work demonstrates that a pre-trained VAE can be efficiently transformed into a VQ-VAE by controlling quantization noise within the VAE's tolerance threshold. We present Quantize-then-Rectify (ReVQ), a framework leveraging pre-trained VAEs to enable rapid VQ-VAE training with minimal computational overhead. By integrating channel multi-group quantization to enlarge codebook capacity and a post rectifier to mitigate quantization errors, ReVQ compresses ImageNet images into at most 512 tokens while sustaining competitive reconstruction quality (rFID = 1.06). Significantly, ReVQ reduces training costs by over two orders of magnitude relative to state-of-the-art approaches: ReVQ finishes full training on a single NVIDIA 4090 in approximately 22 hours, whereas comparable methods require 4.5 days on 32 A100 GPUs. Experimental results show that ReVQ achieves superior efficiency-reconstruction trade-offs.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost of VQ-VAE training
Transforming pre-trained VAE into efficient VQ-VAE
Maintaining reconstruction quality with high compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transform pre-trained VAE into VQ-VAE efficiently
Use channel multi-group quantization for larger codebook
Apply post rectifier to reduce quantization errors
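The post-rectifier idea in the last bullet can be illustrated with a toy stand-in. The paper's rectifier is a lightweight learned network; here a linear map fit by closed-form least squares plays its role, purely for illustration: given continuous latents z and their quantized versions z_q, the rectifier maps z_q back toward z, shrinking the quantization error without retraining the VAE.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(256, 16))                 # stand-in for continuous VAE latents
z_q = z + rng.normal(scale=0.3, size=z.shape)  # stand-in for quantized latents

# Fit a linear "rectifier" W minimizing ||z_q @ W - z||^2 (closed-form least squares).
W, *_ = np.linalg.lstsq(z_q, z, rcond=None)
z_rect = z_q @ W

mse_before = float(np.mean((z_q - z) ** 2))  # raw quantization error
mse_after = float(np.mean((z_rect - z) ** 2))  # error after rectification
print(mse_after < mse_before)  # True: rectification reduces the residual
```

Because the identity map is always a feasible solution, the fitted rectifier can only match or reduce the quantization residual; the actual method uses a differentiable module trained the same way in spirit, against the frozen VAE latents.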
Borui Zhang
Ph.D. student, Tsinghua University
Computer Vision, Machine Learning, Metric Learning, Explainable AI
Qihang Rao
Department of Automation, Tsinghua University, China
Wenzhao Zheng
EECS, University of California, Berkeley
Large Models, Embodied Agents, Autonomous Driving
Jie Zhou
Department of Automation, Tsinghua University, China
Jiwen Lu
Department of Automation, Tsinghua University, China