MacTok: Robust Continuous Tokenization for Image Generation

Date: 2026-03-31
Citations: 0 · Influential citations: 0
AI Summary
Existing continuous image tokenizers suffer from posterior collapse under extremely low token budgets, failing to preserve meaningful visual information. This work proposes MacTok, a 1D continuous tokenizer that integrates stochastic masking with DINO-guided semantic masking. For the first time, it incorporates semantic-aware masking and a global–local representation alignment mechanism into a variational framework, enabling the learning of compact yet robust latent representations from incomplete visual evidence. The approach effectively mitigates posterior collapse, achieving high-fidelity image generation on ImageNet with only 64–128 tokens: it attains a gFID of 1.44 at 256×256 resolution and a state-of-the-art gFID of 1.52 at 512×512, reducing token usage by up to 64× compared to existing methods.
πŸ“ Abstract
Continuous image tokenizers enable efficient visual generation, and those based on variational frameworks can learn smooth, structured latent representations through KL regularization. Yet this often leads to posterior collapse when using fewer tokens, where the encoder fails to encode informative features into the compressed latent space. To address this, we introduce MacTok, a Masked Augmenting 1D Continuous Tokenizer that leverages image masking and representation alignment to prevent collapse while learning compact and robust representations. MacTok applies both random masking to regularize latent learning and DINO-guided semantic masking to emphasize informative regions in images, forcing the model to encode robust semantics from incomplete visual evidence. Combined with global and local representation alignment, MacTok preserves rich discriminative information in a highly compressed 1D latent space, requiring only 64 or 128 tokens. On ImageNet, MacTok achieves a competitive gFID of 1.44 at 256×256 and a state-of-the-art 1.52 at 512×512 with SiT-XL, while reducing token usage by up to 64×. These results confirm that masking and semantic guidance together prevent posterior collapse and achieve efficient, high-fidelity tokenization.
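The abstract's three ingredients (random masking, saliency-guided masking, and KL regularization against posterior collapse) can be sketched in a few lines. The code below is a hypothetical numpy illustration, not the authors' implementation: `random_mask`, `semantic_mask`, and the saliency scores (a stand-in for DINO attention maps) are all assumed names, and the KL term is the standard diagonal-Gaussian regularizer used by variational tokenizers.

```python
import numpy as np

def random_mask(tokens, mask_ratio, rng):
    """Drop a uniformly random subset of patch tokens (stochastic masking)."""
    n = tokens.shape[0]
    n_keep = max(1, int(round(n * (1.0 - mask_ratio))))
    keep = np.sort(rng.permutation(n)[:n_keep])
    return tokens[keep]

def semantic_mask(tokens, saliency, mask_ratio):
    """Preferentially mask the MOST salient patches (saliency is a stand-in
    for DINO attention), forcing the encoder to infer semantics from the rest."""
    n = tokens.shape[0]
    n_drop = int(round(n * mask_ratio))
    order = np.argsort(saliency)[::-1]   # most salient first
    keep = np.sort(order[n_drop:])       # keep everything not dropped
    return tokens[keep]

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(256, 8))            # 256 patch embeddings, dim 8
visible = random_mask(tokens, 0.75, rng)      # keep 64 of 256 tokens
sem_visible = semantic_mask(tokens, rng.random(256), 0.5)
# A collapsed posterior matches the prior exactly, so its KL is zero:
kl = kl_to_standard_normal(np.zeros(16), np.zeros(16))
```

The last line shows why a vanishing KL is the symptom of posterior collapse: when the encoder's posterior degenerates to the prior, the KL term is minimized but the latents carry no image information, which is exactly what aggressive masking is meant to counteract by making the reconstruction objective depend on informative latents.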
Problem

Research questions and friction points this paper is trying to address.

posterior collapse
continuous tokenization
image generation
latent representation
variational frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

continuous tokenization
posterior collapse
masked augmentation
semantic guidance
representation alignment
Authors

Hengyu Zeng (Fudan University)
Xin Gao (Fudan University)
Guanghao Li (Fudan University)
Yuxiang Yan (Fudan University)
Jiaoyang Ruan (Fudan University)
Junpeng Ma (Fudan University)
Haoyu Albert Wang (Fudan University)
Jian Pu (Institute of Science and Technology for Brain-inspired Intelligence, Fudan University)

Topics: Autonomous Systems · Computer Vision · Machine Learning