Semantic-Aware Prefix Learning for Token-Efficient Image Generation

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing visual tokenizers predominantly rely on pixel-level reconstruction objectives, which struggle to preserve high-level semantics under low token budgets, limiting generative fidelity. To address this, the work proposes SMAP, a semantic-aware prefix tokenizer that integrates class-conditional semantics as a core component of the tokenization process rather than as a supplementary regularizer. SMAP injects semantic priors via query-based 1D tokenization and uses a dynamic tail-token dropping mechanism to strengthen the coupling between semantic content and compact prefix representations. Coupled with CARD, a newly designed causal autoregressive-diffusion hybrid generator, the approach achieves significant improvements in both reconstruction and generation quality on ImageNet, demonstrating high-fidelity image synthesis under both discrete and continuous tokenization settings at low token budgets.

📝 Abstract
Visual tokenizers play a central role in latent image generation by bridging high-dimensional images and tractable generative modeling. However, most existing tokenizers are still trained with reconstruction-dominated objectives, which often yield latent representations that are only weakly grounded in high-level semantics. Recent approaches improve semantic alignment, but typically treat semantic signals as auxiliary regularization rather than making them functionally necessary for representation learning. We propose SMAP, a SeMantic-Aware Prefix tokenizer that injects class-level semantic conditions into a query-based 1D tokenization framework. To make semantics indispensable during training, SMAP introduces a tail token dropping strategy, which forces semantic conditions and early latent prefixes to bear increasing responsibility under progressively reduced token budgets. To verify that the resulting latent space is useful for generation rather than reconstruction alone, we further introduce CARD, a hybrid Causal AutoRegressive-Diffusion generator. Extensive experiments on ImageNet show that SMAP consistently improves reconstruction quality across discrete and continuous tokenization settings, and that its semantically grounded latent space yields strong downstream generation performance under compact token budgets.
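The tail token dropping strategy described in the abstract can be illustrated with a minimal sketch: during training, only a random-length leading prefix of the 1D token sequence is kept, so the decoder must reconstruct the image from ever-shorter prefixes plus the class condition. Note this is an assumed toy version, not the paper's implementation; the function name `drop_tail_tokens`, the uniform sampling of the prefix length, and the token shapes are all illustrative choices.

```python
import numpy as np

def drop_tail_tokens(tokens, min_keep=1, rng=None):
    """Keep a random-length leading prefix of a 1D token sequence.

    tokens: (num_tokens, dim) array of latents from a query-based
    1D tokenizer. Dropping the tail forces the early prefix tokens
    (together with the class-level semantic condition) to carry the
    information needed for reconstruction at reduced token budgets.
    """
    rng = rng if rng is not None else np.random.default_rng()
    num_tokens = tokens.shape[0]
    # Sample a prefix length uniformly in [min_keep, num_tokens].
    keep = int(rng.integers(min_keep, num_tokens + 1))
    return tokens[:keep]

# Example: a sequence of 32 latent tokens of dimension 16.
tokens = np.arange(32 * 16, dtype=np.float32).reshape(32, 16)
prefix = drop_tail_tokens(tokens, rng=np.random.default_rng(0))
```

In this reading, the dropped tail is simply discarded rather than masked, so the surviving prefix must remain a valid (if lossier) code for the whole image; the actual schedule over token budgets used in training is not specified here.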
Problem

Research questions and friction points this paper is trying to address.

visual tokenization
semantic alignment
latent representation
image generation
token efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

semantic-aware tokenization
prefix learning
tail token dropping
hybrid autoregressive-diffusion generation
token-efficient image generation