CODA: Repurposing Continuous VAEs for Discrete Tokenization

📅 2025-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Discrete visual tokenizers suffer from three core challenges: training instability, low codebook utilization, and poor reconstruction fidelity—stemming from the entanglement of compression and discretization objectives. To address this, we propose a decoupled transfer-adaptation framework that efficiently converts a pretrained continuous VAE into a high-fidelity discrete tokenizer. Our approach preserves continuous representation capacity via feature distillation, enables controllable discretization through learnable vector quantization (VQ), and enhances reconstruction fidelity with a reconstruction-aware regularization term. Evaluated on ImageNet at 256×256 resolution, our method achieves 100% codebook utilization and exceptionally low rFID scores (0.43 at 8× compression; 1.34 at 16×), while requiring only 1/6 the training cost of VQGAN. This work significantly advances discrete visual modeling in terms of stability, efficiency, and reconstruction quality.
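The core discretization step described above — mapping each continuous VAE latent to its nearest entry in a learnable codebook — can be sketched as follows. This is a minimal illustration of standard nearest-neighbor vector quantization, not CODA's exact procedure; all array shapes and names are assumptions.

```python
import numpy as np

def quantize(latents, codebook):
    """Nearest-neighbor vector quantization of continuous latents.

    latents:  (N, D) continuous VAE features (e.g. a flattened feature map)
    codebook: (K, D) learnable code vectors
    Returns the discrete token ids and their quantized vectors.
    """
    # Pairwise squared Euclidean distances between latents and codes: (N, K)
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    ids = d.argmin(axis=1)           # discrete token id per latent
    return ids, codebook[ids]        # ids plus quantized features

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))      # K=16 codes, D=4 dims (toy sizes)
latents = rng.normal(size=(128, 4))      # 128 continuous latent vectors
ids, quantized = quantize(latents, codebook)

# Codebook utilization = fraction of codes assigned at least once;
# the paper reports 100% utilization for its learned codebook.
utilization = len(np.unique(ids)) / len(codebook)
```

The utilization measure at the end mirrors the metric the summary reports: a healthy tokenizer assigns every code to some latent, whereas jointly trained tokenizers often leave much of the codebook dead.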

📝 Abstract
Discrete visual tokenizers transform images into a sequence of tokens, enabling token-based visual generation akin to language models. However, this process is inherently challenging, as it requires both compressing visual signals into a compact representation and discretizing them into a fixed set of codes. Traditional discrete tokenizers typically learn the two tasks jointly, often leading to unstable training, low codebook utilization, and limited reconstruction quality. In this paper, we introduce CODA (COntinuous-to-Discrete Adaptation), a framework that decouples compression and discretization. Instead of training discrete tokenizers from scratch, CODA adapts off-the-shelf continuous VAEs -- already optimized for perceptual compression -- into discrete tokenizers via a carefully designed discretization process. By primarily focusing on discretization, CODA ensures stable and efficient training while retaining the strong visual fidelity of continuous VAEs. Empirically, with 6× less training budget than standard VQGAN, our approach achieves a remarkable codebook utilization of 100% and notable reconstruction FID (rFID) of 0.43 and 1.34 for 8× and 16× compression on the ImageNet 256×256 benchmark.
Problem

Research questions and friction points this paper is trying to address.

Discrete tokenizers suffer from unstable training and low codebook utilization
Jointly learning compression and discretization limits reconstruction quality
Can decoupling compression from discretization improve efficiency and fidelity?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples compression and discretization tasks
Adapts continuous VAEs into discrete tokenizers
Ensures stable training with high visual fidelity
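The adaptation idea in the bullets above — keeping a pretrained continuous VAE's features while learning only the discretization — is typically trained with VQ-style loss terms. The sketch below is a hypothetical illustration combining a codebook term (pulling codes toward the frozen VAE features) and a β-weighted commitment term; the paper's actual objective also involves feature distillation and a reconstruction-aware regularizer, which are not reproduced here.

```python
import numpy as np

def adaptation_loss(latents, codebook, beta=0.25):
    """Hypothetical VQ-style adaptation objective (names are illustrative).

    In an autograd framework the two terms are separated by stop-gradient
    (detach): the first updates the codebook, the second the encoder side.
    Here both are computed on plain arrays purely for illustration.
    """
    # Nearest code per latent, as in standard vector quantization
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    q = codebook[d.argmin(axis=1)]

    codebook_term = ((q - latents) ** 2).mean()  # moves codes toward VAE features
    commit_term = ((latents - q) ** 2).mean()    # keeps features near chosen codes
    return codebook_term + beta * commit_term

rng = np.random.default_rng(1)
loss = adaptation_loss(rng.normal(size=(64, 8)), rng.normal(size=(32, 8)))
```

Because the continuous VAE stays (largely) fixed, only the quantization machinery is optimized, which is what makes the training budget a fraction of a from-scratch VQGAN's.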
👥 Authors
Zeyu Liu (Tsinghua University)
Zanlin Ni (Tsinghua University)
Yeguo Hua (Tsinghua University)
Xin Deng (Renmin University)
Xiao Ma (Lenovo Research, AI Lab)
Cheng Zhong (Lenovo Research, AI Lab)
Gao Huang (Tsinghua University)