🤖 AI Summary
Autoregressive visual generation faces an inherent tension between discrete and continuous token representations: discrete tokens enable simple modeling but suffer from reconstruction distortion and unstable tokenizer training, whereas continuous tokens preserve fidelity at the cost of complex probabilistic modeling. This paper proposes TokenBridge, a framework that integrates the advantages of both paradigms. Its core innovation is a decoupled quantization mechanism: after tokenizer training, dimension-wise quantization maps continuous features to discrete tokens with minimal information loss, bypassing end-to-end discrete-tokenizer optimization, while a lightweight autoregressive prediction head performs standard classification over the resulting very large discrete token space. Experiments demonstrate that TokenBridge matches continuous-token baselines in reconstruction and generation quality while substantially simplifying training and inference: optimization uses only a cross-entropy loss, with no specialized distribution modeling and no tokenizer fine-tuning.
📝 Abstract
Autoregressive visual generation models typically rely on tokenizers to compress images into tokens that can be predicted sequentially. A fundamental dilemma exists in token representation: discrete tokens enable straightforward modeling with standard cross-entropy loss, but suffer from information loss and tokenizer training instability; continuous tokens better preserve visual details, but require complex distribution modeling, complicating the generation pipeline. In this paper, we propose TokenBridge, which bridges this gap by maintaining the strong representation capacity of continuous tokens while preserving the modeling simplicity of discrete tokens. To achieve this, we decouple discretization from the tokenizer training process through post-training quantization that directly obtains discrete tokens from continuous representations. Specifically, we introduce a dimension-wise quantization strategy that independently discretizes each feature dimension, paired with a lightweight autoregressive prediction mechanism that efficiently models the resulting large token space. Extensive experiments show that our approach achieves reconstruction and generation quality on par with continuous methods while using standard categorical prediction. This work demonstrates that bridging discrete and continuous paradigms can effectively harness the strengths of both approaches, providing a promising direction for high-quality visual generation with simple autoregressive modeling. Project page: https://yuqingwang1029.github.io/TokenBridge.
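To make the core idea concrete, the dimension-wise post-training quantization described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the bin count, value range, and helper names are illustrative assumptions. Each dimension of a continuous feature vector is independently snapped to the nearest of a fixed set of bin centers, yielding one discrete token index per dimension; dequantization simply looks the centers back up.

```python
# Hypothetical sketch of dimension-wise post-training quantization.
# Bin count (16) and latent range [-5, 5] are assumptions for illustration,
# not TokenBridge's actual configuration.

def make_bins(num_bins=16, lo=-5.0, hi=5.0):
    """Uniform bin centers covering the assumed latent value range."""
    step = (hi - lo) / (num_bins - 1)
    return [lo + i * step for i in range(num_bins)]

def quantize(feature_vec, centers):
    """Independently map each continuous dimension to its nearest bin index."""
    return [min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            for v in feature_vec]

def dequantize(tokens, centers):
    """Recover a continuous vector from per-dimension token indices."""
    return [centers[t] for t in tokens]

centers = make_bins()
vec = [0.13, -2.7, 4.9, 0.0]       # one continuous feature vector
tokens = quantize(vec, centers)     # one discrete token per dimension
recon = dequantize(tokens, centers)
# Per-dimension reconstruction error is at most half the bin width,
# so finer bins trade a larger token space for higher fidelity.
```

Because each dimension is quantized independently, the effective vocabulary grows as (bins per dimension) ^ (feature dimensions), which is why the paper pairs this with a lightweight autoregressive head that predicts the per-dimension indices sequentially rather than classifying over the full joint space.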