CaTok: Taming Mean Flows for One-Dimensional Causal Image Tokenization

📅 2026-03-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing visual tokenization methods struggle to simultaneously support causal autoregressive modeling and preserve spatial structure, often causing training instability or degraded generation quality. This work proposes CaTok, the first approach to couple a one-dimensional causal token sequence with a MeanFlow decoder, enabling efficient, high-fidelity image generation through a temporal interval selection mechanism. CaTok additionally introduces REPA-A regularization to align encoder features with those of vision foundation models. The method supports both fast single-step generation and high-quality multi-step sampling, achieving state-of-the-art ImageNet reconstruction (0.75 FID, 22.53 PSNR, 0.674 SSIM) with fewer training epochs, while its autoregressive model performs comparably to current advanced approaches.
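The summary does not detail how the temporal interval selection mechanism works. A minimal NumPy sketch of one plausible reading, in which the flow time `t` determines how many leading causal tokens condition the decoder (the linear schedule and the function name `select_token_prefix` are assumptions, not the paper's actual mechanism):

```python
import numpy as np

def select_token_prefix(tokens: np.ndarray, t: float, n_min: int = 1) -> np.ndarray:
    """Map a flow time t in [0, 1] to a causal prefix of the 1D token sequence.

    tokens: (batch, num_tokens, dim) 1D token sequence from the encoder.
    Later times reveal more tokens, so earlier tokens must carry coarse,
    globally useful content -- the causal property the summary describes.
    """
    n_total = tokens.shape[1]
    k = max(n_min, int(round(t * n_total)))  # linear schedule (assumed)
    return tokens[:, :k]  # prefix only: token i never depends on tokens > i

# Example: a 32-token sequence conditioned at mid-interval uses 16 tokens.
tokens = np.zeros((2, 32, 8))
print(select_token_prefix(tokens, 0.5).shape)  # -> (2, 16, 8)
```

Because each time interval sees only a prefix, the same 1D sequence serves both one-step generation (full prefix) and multi-step refinement (growing prefixes), matching the causal ordering that next-token prediction requires.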

📝 Abstract
Autoregressive (AR) language models rely on causal tokenization, but extending this paradigm to vision remains non-trivial. Current visual tokenizers either flatten 2D patches into non-causal sequences or enforce heuristic orderings that misalign with the "next-token prediction" pattern. Recent diffusion autoencoders similarly fall short: conditioning the decoder on all tokens lacks causality, while applying a nested dropout mechanism introduces imbalance. To address these challenges, we present CaTok, a 1D causal image tokenizer with a MeanFlow decoder. By selecting tokens over time intervals and binding them to the MeanFlow objective, as illustrated in Fig. 1, CaTok learns causal 1D representations that support both fast one-step generation and high-fidelity multi-step sampling, while naturally capturing diverse visual concepts across token intervals. To further stabilize and accelerate training, we propose a straightforward regularization, REPA-A, which aligns encoder features with Vision Foundation Models (VFMs). Experiments demonstrate that CaTok achieves state-of-the-art results on ImageNet reconstruction, reaching 0.75 FID, 22.53 PSNR and 0.674 SSIM with fewer training epochs, and the AR model attains performance comparable to leading approaches.
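The abstract does not give REPA-A's formulation. A hedged NumPy sketch of a REPA-style alignment term, which projects encoder features into the VFM feature space and maximizes their per-token cosine similarity with frozen VFM features (the projection `W`, the per-token averaging, and the function name are assumptions):

```python
import numpy as np

def repa_alignment_loss(enc_feats: np.ndarray, vfm_feats: np.ndarray,
                        W: np.ndarray) -> float:
    """REPA-style alignment (REPA-A specifics are assumed, not from the paper).

    enc_feats: (batch, num_tokens, d_enc) tokenizer encoder features.
    vfm_feats: (batch, num_tokens, d_vfm) frozen VFM (e.g., DINOv2) features.
    W:         (d_enc, d_vfm) learned projection into the VFM space.
    Returns 1 - mean per-token cosine similarity, so 0 means perfect alignment.
    """
    z = enc_feats @ W
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    y = vfm_feats / np.linalg.norm(vfm_feats, axis=-1, keepdims=True)
    return float(1.0 - np.mean(np.sum(z * y, axis=-1)))

# Perfectly aligned features drive the loss to zero.
rng = np.random.default_rng(0)
f = rng.normal(size=(2, 4, 16))
print(repa_alignment_loss(f, f, np.eye(16)) < 1e-6)  # -> True
```

Adding such a term to the tokenizer objective pulls encoder features toward semantically structured VFM representations, which is consistent with the stabilization and faster convergence the abstract claims for REPA-A.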
Problem

Research questions and friction points this paper is trying to address.

causal tokenization
autoregressive vision models
image tokenization
one-dimensional representation
next-token prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

causal tokenization
MeanFlow decoder
one-dimensional image representation
autoregressive vision modeling
REPA-A regularization
Yitong Chen
Fudan University
Computer Vision
Zuxuan Wu
Fudan University
Xipeng Qiu
Institute of Trustworthy Embodied AI, Fudan University; Shanghai Innovation Institute; Shanghai Key Laboratory of Multimodal Embodied AI
Yu-Gang Jiang
Professor, Fudan University. IEEE & IAPR Fellow
Video Analysis · Embodied AI · Trustworthy AI