🤖 AI Summary
To address the prohibitively high computational cost of training high-compression-rate VQ-VAEs—often requiring thousands of GPU hours—this paper proposes Quantize-then-Rectify (ReVQ), a novel framework. ReVQ first leverages a pretrained VAE to extract continuous latent representations, then applies grouped channel quantization to substantially expand codebook capacity, and finally employs a lightweight differentiable rectifier module to suppress quantization errors. This design decouples quantization from reconstruction and replaces end-to-end joint optimization with differentiable post-quantization correction, achieving high-fidelity reconstruction at minimal overhead: on ImageNet, it attains an rFID of 1.06 at ≤512 tokens per image, while training completes in only ~22 hours on a single NVIDIA RTX 4090—two orders of magnitude faster than state-of-the-art methods. The core contribution lies in this efficiency-fidelity trade-off, enabled by modular, differentiable error correction without sacrificing reconstruction quality.
📝 Abstract
Visual tokenizers are pivotal in multimodal large models, acting as bridges between continuous inputs and discrete tokens. Nevertheless, training high-compression-rate VQ-VAEs remains computationally demanding, often necessitating thousands of GPU hours. This work demonstrates that a pre-trained VAE can be efficiently transformed into a VQ-VAE by controlling quantization noise within the VAE's tolerance threshold. We present **Quantize-then-Rectify (ReVQ)**, a framework leveraging pre-trained VAEs to enable rapid VQ-VAE training with minimal computational overhead. By integrating **channel multi-group quantization** to enlarge codebook capacity and a **post rectifier** to mitigate quantization errors, ReVQ compresses ImageNet images into at most 512 tokens while sustaining competitive reconstruction quality (rFID = 1.06). Significantly, ReVQ reduces training costs by over two orders of magnitude relative to state-of-the-art approaches: ReVQ finishes full training on a single NVIDIA RTX 4090 in approximately 22 hours, whereas comparable methods require 4.5 days on 32 A100 GPUs. Experimental results show that ReVQ achieves superior efficiency-reconstruction trade-offs.
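The core idea of channel multi-group quantization is to split the pretrained VAE's latent channels into groups and quantize each group independently against its own codebook, so that K entries per group across G groups yield K^G effective combinations per spatial location. The sketch below illustrates this idea; the function name, array shapes, and grouping scheme are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def grouped_channel_quantize(z, codebooks):
    """Illustrative sketch of channel multi-group quantization.

    z          : latent tensor of shape (C, H, W) from a pretrained VAE.
    codebooks  : list of G arrays, each (K, C // G); group g's channels
                 are snapped per spatial location to the nearest entry
                 of codebooks[g] (squared Euclidean distance).

    Returns the quantized latent and the (G, H, W) index map, whose
    flattened indices serve as the discrete tokens.
    """
    C, H, W = z.shape
    G = len(codebooks)
    cg = C // G                                  # channels per group
    out = np.empty_like(z)
    indices = np.empty((G, H, W), dtype=np.int64)
    for g, cb in enumerate(codebooks):
        # gather this group's channels as (H*W, cg) vectors
        grp = z[g * cg:(g + 1) * cg].reshape(cg, -1).T
        # squared distances to every codebook entry: (H*W, K)
        d = ((grp[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)                        # nearest entry per location
        indices[g] = idx.reshape(H, W)
        out[g * cg:(g + 1) * cg] = cb[idx].T.reshape(cg, H, W)
    return out, indices
```

In the full method, the quantized latent would then pass through a lightweight rectifier network (trained to reduce the residual `z - out`) before the frozen VAE decoder, which is what keeps the quantization noise inside the VAE's tolerance.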