🤖 AI Summary
Existing visual tokenization methods struggle to simultaneously support causal autoregressive modeling and preserve spatial structure, often leading to training instability or degraded generation quality. This work proposes CaTok, the first approach that pairs a one-dimensional causal token sequence with a MeanFlow decoder, enabling efficient, high-fidelity image generation through a time-interval token selection mechanism. CaTok additionally introduces REPA-A regularization to align encoder features with those of vision foundation models. The method supports both fast single-step generation and high-quality multi-step sampling, achieving state-of-the-art ImageNet reconstruction (0.75 FID, 22.53 PSNR, 0.674 SSIM) with fewer training epochs, while its autoregressive model performs on par with current leading approaches.
📝 Abstract
Autoregressive (AR) language models rely on causal tokenization, but extending this paradigm to vision remains non-trivial. Current visual tokenizers either flatten 2D patches into non-causal sequences or enforce heuristic orderings that misalign with the "next-token prediction" pattern. Recent diffusion autoencoders similarly fall short: conditioning the decoder on all tokens lacks causality, while applying a nested dropout mechanism introduces imbalance. To address these challenges, we present CaTok, a 1D causal image tokenizer with a MeanFlow decoder. By selecting tokens over time intervals and binding them to the MeanFlow objective, as illustrated in Fig. 1, CaTok learns causal 1D representations that support both fast one-step generation and high-fidelity multi-step sampling, while naturally capturing diverse visual concepts across token intervals. To further stabilize and accelerate training, we propose REPA-A, a straightforward regularization that aligns encoder features with Vision Foundation Models (VFMs). Experiments demonstrate that CaTok achieves state-of-the-art results on ImageNet reconstruction, reaching 0.75 FID, 22.53 PSNR, and 0.674 SSIM with fewer training epochs, and that the AR model attains performance comparable to leading approaches.
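One plausible reading of "selecting tokens over time intervals" is that a MeanFlow decoding step at flow time `t` conditions only on a causal prefix of the 1D token sequence, so later times see more tokens. The sketch below is an assumption-laden illustration: the function name `select_causal_prefix` and the linear mapping from `t` to prefix length are not stated in the abstract.

```python
import numpy as np

def select_causal_prefix(tokens, t):
    """Illustrative sketch (not CaTok's exact rule): condition a MeanFlow
    decoding step at flow time t in [0, 1] on a causal prefix of the 1D
    token sequence, so that later times see more tokens."""
    n = tokens.shape[0]
    k = max(1, int(np.ceil(t * n)))  # assumed linear time-to-length mapping
    return tokens[:k]

tokens = np.zeros((8, 4))  # 8 tokens, 4-dim each
print(select_causal_prefix(tokens, 0.5).shape)  # → (4, 4)
```

Under this reading, early tokens are trained with every interval and naturally carry coarse content, while later tokens refine it, which matches the abstract's claim that diverse visual concepts emerge across token intervals.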
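REPA-A is described only as aligning encoder features with VFM features. A common form of such alignment (as in REPA-style regularization) is a cosine-similarity loss against frozen VFM features; the generic sketch below assumes that form, and the function name and equal feature shapes are illustrative assumptions.

```python
import numpy as np

def alignment_loss(enc_feats, vfm_feats):
    """Generic cosine-alignment loss (an assumed form of REPA-A):
    1 - mean cosine similarity between encoder tokens and frozen
    VFM features, both shaped (batch, tokens, dim)."""
    e = enc_feats / np.linalg.norm(enc_feats, axis=-1, keepdims=True)
    v = vfm_feats / np.linalg.norm(vfm_feats, axis=-1, keepdims=True)
    return 1.0 - np.sum(e * v, axis=-1).mean()

x = np.random.default_rng(0).standard_normal((4, 16, 32))
print(alignment_loss(x, x) < 1e-6)  # → True (identical features align)
```

In practice a learned projection would map encoder features into the VFM's dimensionality before computing the loss; that detail is omitted here for brevity.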