EvoTok: A Unified Image Tokenizer via Residual Latent Evolution for Visual Understanding and Generation

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision models struggle to jointly capture the high-level semantics required for understanding and the pixel-level details essential for generation, resulting in a representational granularity gap. This work proposes EvoTok, which, for the first time, models visual representations as residual evolution trajectories within a shared latent space. By employing cascaded residual vector quantization (RVQ), EvoTok produces multi-granularity token sequences that progressively unify low-level details and high-level semantics. Trained on only 13 million images, the method achieves a strong 0.43 rFID on ImageNet-1K, outperforms prior approaches on 7 of 9 understanding benchmarks, and performs well on both the GenEval and GenAI-Bench generation benchmarks, effectively balancing the dual demands of perception and synthesis.

📝 Abstract
The development of unified multimodal large language models (MLLMs) is fundamentally challenged by the granularity gap between visual understanding and generation: understanding requires high-level semantic abstractions, while image generation demands fine-grained pixel-level representations. Existing approaches usually either enforce both supervision signals on the same set of representations or decouple them into separate feature spaces, leading to interference and inconsistency, respectively. In this work, we propose EvoTok, a unified image tokenizer that reconciles these requirements through a residual evolution process within a shared latent space. Instead of maintaining separate token spaces for pixels and semantics, EvoTok encodes an image into a cascaded sequence of residual tokens via residual vector quantization. This residual sequence forms an evolution trajectory in which earlier stages capture low-level details and deeper stages progressively transition toward high-level semantic representations. Despite being trained on a relatively modest dataset of 13M images, far smaller than the billion-scale datasets used by many previous unified tokenizers, EvoTok achieves a strong reconstruction quality of 0.43 rFID on ImageNet-1K at 256×256 resolution. When integrated with a large language model, EvoTok shows promising performance on 7 of 9 visual understanding benchmarks, and remarkable results on image generation benchmarks such as GenEval and GenAI-Bench. These results demonstrate that modeling visual representations as an evolving trajectory provides an effective and principled solution for unifying visual understanding and generation.
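To make the cascaded quantization concrete, the sketch below shows plain residual vector quantization: each stage snaps the current residual to its nearest codebook entry and passes the leftover residual to the next stage, so the reconstruction is the sum of the selected entries. This is a minimal illustration of the generic RVQ mechanism only; the function names, codebook sizes, and the use of random codebooks are assumptions for demonstration, not EvoTok's actual tokenizer (which learns codebooks end to end with semantic supervision on deeper stages).

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Cascaded residual quantization: each stage quantizes the current
    residual with its own codebook, then forwards the leftover residual."""
    residual = x.astype(float)
    indices = []
    for cb in codebooks:
        # pick the codebook entry nearest to the current residual
        dists = np.linalg.norm(cb - residual, axis=1)
        idx = int(np.argmin(dists))
        indices.append(idx)
        residual = residual - cb[idx]
    return indices, residual

def rvq_decode(indices, codebooks):
    # reconstruction is the sum of the chosen entries across all stages
    return sum(cb[i] for cb, i in zip(codebooks, indices))

rng = np.random.default_rng(0)
dim, num_codes, num_stages = 4, 16, 3   # illustrative sizes
codebooks = [rng.normal(size=(num_codes, dim)) for _ in range(num_stages)]
x = rng.normal(size=dim)

indices, final_residual = rvq_encode(x, codebooks)
x_hat = rvq_decode(indices, codebooks)
# by construction, reconstruction plus the final residual recovers x exactly
assert np.allclose(x_hat + final_residual, x)
```

The "evolution trajectory" view in the abstract corresponds to reading the per-stage indices in order: early indices coarsely account for the signal, and later stages refine what remains, which is where EvoTok steers deeper stages toward semantic content.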
Problem

Research questions and friction points this paper is trying to address.

granularity gap
visual understanding
image generation
unified representation
multimodal large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

residual latent evolution
unified image tokenizer
residual vector quantization
multimodal large language models
visual representation trajectory