VTBench: Evaluating Visual Tokenizers for Autoregressive Image Generation

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
State-of-the-art autoregressive image generation models are bottlenecked by the quality of visual tokenizers (VTs): mainstream discrete VTs, despite their widespread use, are substantially inferior to continuous VAEs in image reconstruction fidelity, fine-grained structural preservation, and text retention, yet lack dedicated evaluation benchmarks. Method: We introduce VTBench, the first VT-specific benchmark, which decouples VT evaluation into three core tasks: reconstruction fidelity, fine-grained structural preservation, and text readability. It employs multi-metric quantification (LPIPS, CLIP-Score, OCR accuracy), cross-architecture comparison (VQ-VAE, DALL·E tokenizer, SD-VAE), and an analysis of GPT-4o image generation. Contribution/Results: Our evaluation reveals systematic failures in discrete VTs, including texture loss, character distortion, and object deformation, while empirically confirming the superiority of continuous VAEs in preserving spatial structure and semantic detail. We open-source VTBench and the associated code to advance standardized VT development.
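The reconstruction-fidelity task boils down to comparing a tokenizer's decoded output against the original image under some distance metric. The perceptual metrics named above (LPIPS, CLIP-Score) require pretrained networks; as a lightweight stand-in that illustrates the same comparison loop, here is a pure-Python PSNR sketch (PSNR is a standard pixel-level fidelity metric, used here for illustration only, not as VTBench's exact metric set):

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.
    Higher is better; a perfect reconstruction scores infinity."""
    assert len(original) == len(reconstructed)
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

# Toy example: a faithful vs. a degraded reconstruction of the same "image"
img  = [10, 200, 30, 120]
good = [11, 199, 30, 121]
bad  = [60, 150, 90,  60]
print(psnr(img, good) > psnr(img, bad))  # the faithful reconstruction scores higher
```

A benchmark like VTBench runs this kind of loop over many images and metrics, then aggregates per tokenizer, which is what lets it rank discrete VTs against continuous VAEs on reconstruction alone.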

📝 Abstract
Autoregressive (AR) models have recently shown strong performance in image generation, where a critical component is the visual tokenizer (VT) that maps continuous pixel inputs to discrete token sequences. The quality of the VT largely defines the upper bound of AR model performance. However, current discrete VTs fall significantly behind continuous variational autoencoders (VAEs), leading to degraded image reconstructions and poor preservation of details and text. Existing benchmarks focus on end-to-end generation quality, without isolating VT performance. To address this gap, we introduce VTBench, a comprehensive benchmark that systematically evaluates VTs across three core tasks: Image Reconstruction, Detail Preservation, and Text Preservation, and covers a diverse range of evaluation scenarios. We systematically assess state-of-the-art VTs using a set of metrics to evaluate the quality of reconstructed images. Our findings reveal that continuous VAEs produce superior visual representations compared to discrete VTs, particularly in retaining spatial structure and semantic detail. In contrast, the degraded representations produced by discrete VTs often lead to distorted reconstructions, loss of fine-grained textures, and failures in preserving text and object integrity. Furthermore, we conduct experiments on GPT-4o image generation and discuss its potential AR nature, offering new insights into the role of visual tokenization. We release our benchmark and codebase publicly to support further research and call on the community to develop strong, general-purpose open-source VTs.
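The abstract's contrast between discrete VTs and continuous VAEs comes down to the quantization step: a VQ-style tokenizer snaps each continuous latent vector to its nearest codebook entry and hands the AR model only the index, discarding the residual. A minimal sketch of that lookup (the 2-D codebook here is a hypothetical toy; real tokenizers operate on learned, high-dimensional latents):

```python
def quantize(latent, codebook):
    """Return the index of the codebook entry nearest to `latent`
    (squared Euclidean distance), as in a VQ-VAE-style discrete tokenizer."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(latent, codebook[i]))

# Toy 2-D codebook with four entries
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

z = (0.9, 0.2)               # continuous latent from the encoder
idx = quantize(z, codebook)  # discrete token the AR model sees
z_hat = codebook[idx]        # what the decoder actually receives

print(idx, z_hat)
```

The residual `z - z_hat` is lost at this step, which is one intuition for the texture loss and text distortion the paper attributes to discrete VTs; a continuous VAE passes `z` through unquantized.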
Problem

Research questions and friction points this paper is trying to address.

Evaluating visual tokenizers' impact on autoregressive image generation quality
Assessing discrete VTs vs continuous VAEs in image reconstruction and detail preservation
Benchmarking VT performance in text and spatial structure retention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces VTBench for systematic VT evaluation
Compares continuous VAEs vs discrete VTs
Assesses GPT-4o's AR image generation potential