🤖 AI Summary
This work proposes SemTok, a semantics-driven one-dimensional tokenizer that addresses the limitations of existing vision tokenizers, which often rely on fixed 2D grids and prioritize pixel-level reconstruction at the expense of compact global semantics. SemTok compresses images into high-level discrete semantic tokens and introduces a masked autoregressive generative framework. Its key innovations are a 2D-to-1D semantic tokenization strategy, a semantic alignment constraint, and a two-stage generative training paradigm. Experiments show that SemTok achieves state-of-the-art image reconstruction, delivering higher fidelity from an extremely compact token representation and significantly enhancing downstream generative capability.
📝 Abstract
Visual generative models based on latent spaces have achieved great success, underscoring the significance of visual tokenization. Mapping images to latents improves efficiency and enables multimodal alignment, facilitating scaling to downstream tasks. Existing visual tokenizers primarily map images onto fixed 2D spatial grids and focus on pixel-level reconstruction, which hinders the capture of representations with compact global semantics. To address these issues, we propose \textbf{SemTok}, a semantic one-dimensional tokenizer that compresses 2D images into 1D discrete tokens carrying high-level semantics. SemTok sets a new state of the art in image reconstruction, achieving superior fidelity with a remarkably compact token representation. This is achieved via a synergistic framework with three key innovations: a 2D-to-1D tokenization scheme, a semantic alignment constraint, and a two-stage generative training strategy. Building on SemTok, we construct a masked autoregressive generation framework that yields notable improvements in downstream image generation tasks. Experiments confirm the effectiveness of our semantic 1D tokenization. Our code will be open-sourced.
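The abstract does not specify the tokenizer's internals, but the core idea of 2D-to-1D semantic tokenization can be sketched in a minimal form: a small set of learned 1D query vectors cross-attends over the flattened 2D patch grid to pool global summaries, which are then vector-quantized into discrete token ids. All names, shapes, and dimensions below are hypothetical illustrations, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tokenize_2d_to_1d(patch_feats, queries, codebook):
    """Compress a 2D grid of patch features into K discrete 1D tokens.

    patch_feats: (H*W, D) flattened 2D patch features
    queries:     (K, D) learned 1D query vectors, with K << H*W
    codebook:    (V, D) vector-quantization codebook
    Returns a (K,) array of integer token ids.
    """
    # Cross-attention: each query pools a global summary of the grid,
    # discarding the fixed 2D spatial layout.
    attn = softmax(queries @ patch_feats.T / np.sqrt(patch_feats.shape[1]))
    latents = attn @ patch_feats                                      # (K, D)
    # Vector quantization: snap each latent to its nearest codebook entry.
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (K, V)
    return d2.argmin(axis=-1)

rng = np.random.default_rng(0)
H = W = 16; D = 64; K = 32; V = 1024                 # illustrative sizes
feats = rng.standard_normal((H * W, D))
ids = tokenize_2d_to_1d(feats,
                        rng.standard_normal((K, D)),
                        rng.standard_normal((V, D)))
print(ids.shape)  # 256 patches compressed to 32 discrete token ids
```

In this toy setup, 256 patch positions collapse to 32 tokens; the "compact token representation" claim corresponds to keeping K small relative to the patch count while training the queries and codebook (plus a semantic alignment loss) so the tokens retain global semantics rather than per-pixel detail.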