🤖 AI Summary
Existing visual tokenizers are typically constrained to single-modal, single-task paradigms (either reconstruction or understanding) and struggle to achieve high-fidelity reconstruction and cross-modal semantic understanding at the same time. This work introduces AToken, the first unified visual tokenizer that jointly addresses reconstruction and understanding for images, videos, and 3D assets within a single homogeneous framework. Its core innovations include: (i) a shared 4D latent space; (ii) a pure Transformer architecture with 4D rotary positional encoding, enabling inputs of arbitrary spatial resolution and temporal length; and (iii) an adversarial-free training objective combining perceptual and Gram matrix losses, alongside a progressive training strategy supporting both continuous and discrete tokens. Experiments demonstrate state-of-the-art performance across modalities: images (rFID = 0.21, ImageNet classification accuracy = 82.2%), video (rFVD = 3.01, MSRVTT retrieval R@1 = 32.6%), and 3D (PSNR = 28.19, classification accuracy = 90.9%). AToken further enables cross-modal generation, including text-to-video and image-to-3D synthesis.
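The Gram matrix loss mentioned above penalizes differences in channel-wise feature correlations between the reconstruction and the target, which is one way to supervise texture without an adversarial discriminator. A minimal sketch of such a loss, with illustrative function names that are not AToken's actual API:

```python
# Hedged sketch: a Gram-matrix-style loss for adversarial-free reconstruction
# training. Names and shapes are illustrative, not AToken's implementation.
import numpy as np

def gram_matrix(feats: np.ndarray) -> np.ndarray:
    """Channel-wise feature correlations; feats has shape (C, H*W)."""
    c, n = feats.shape
    return feats @ feats.T / (c * n)  # normalize by feature-map size

def gram_loss(feats_recon: np.ndarray, feats_target: np.ndarray) -> float:
    """Squared distance between Gram matrices of reconstructed vs. target features."""
    return float(np.mean((gram_matrix(feats_recon) - gram_matrix(feats_target)) ** 2))

# Identical features give zero loss; perturbed features give a positive one.
f = np.random.default_rng(0).standard_normal((8, 64))
print(gram_loss(f, f))            # 0.0
print(gram_loss(f, 0.5 * f) > 0)  # True
```

In practice such a loss is computed on features from a frozen pretrained network (as in style-transfer work), not on raw pixels.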
📝 Abstract
We present AToken, the first unified visual tokenizer that achieves both high-fidelity reconstruction and semantic understanding across images, videos, and 3D assets. Unlike existing tokenizers that specialize in either reconstruction or understanding for single modalities, AToken encodes these diverse visual inputs into a shared 4D latent space, unifying both tasks and modalities in a single framework. Specifically, we introduce a pure transformer architecture with 4D rotary position embeddings to process visual inputs of arbitrary resolutions and temporal durations. To ensure stable training, we introduce an adversarial-free training objective that combines perceptual and Gram matrix losses, achieving state-of-the-art reconstruction quality. By employing a progressive training curriculum, AToken gradually expands from single images to videos and 3D assets, and supports both continuous and discrete latent tokens. AToken achieves 0.21 rFID with 82.2% ImageNet accuracy for images, 3.01 rFVD with 32.6% MSRVTT retrieval R@1 for videos, and 28.19 PSNR with 90.9% classification accuracy for 3D. In downstream applications, AToken enables both visual generation tasks (e.g., image generation with continuous and discrete tokens, text-to-video generation, image-to-3D synthesis) and understanding tasks (e.g., multimodal LLMs), achieving competitive performance across all benchmarks. These results shed light on next-generation multimodal AI systems built upon unified visual tokenization.
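The 4D rotary position embeddings described above generalize standard 1D RoPE: the channel dimension of each query/key is split into four groups, and each group is rotated by angles proportional to one coordinate axis (e.g., time plus three spatial axes). A minimal sketch under that assumption, with illustrative names that are not AToken's actual implementation:

```python
# Hedged sketch of 4D rotary position embeddings (RoPE). The head dimension is
# split into four equal groups, one per axis; each group is rotated by angles
# proportional to that axis's coordinate. Illustrative only, not AToken's code.
import numpy as np

def rope_1d(x: np.ndarray, pos: float, base: float = 10000.0) -> np.ndarray:
    """Rotate channel pairs of x (even length d) by position-dependent angles."""
    d = x.shape[-1]
    freqs = base ** (-np.arange(0, d, 2) / d)  # (d/2,) frequency schedule
    ang = pos * freqs
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return np.concatenate(
        [x1 * np.cos(ang) - x2 * np.sin(ang),
         x1 * np.sin(ang) + x2 * np.cos(ang)], axis=-1)

def rope_4d(x: np.ndarray, coords) -> np.ndarray:
    """Apply 1D RoPE per axis on four equal slices of the channel dimension."""
    d = x.shape[-1] // 4
    parts = [rope_1d(x[..., i * d:(i + 1) * d], c) for i, c in enumerate(coords)]
    return np.concatenate(parts, axis=-1)

q = np.random.default_rng(1).standard_normal(32)  # one query vector
q_rot = rope_4d(q, coords=(2, 5, 7, 0))           # position along (t, x, y, z)
print(q_rot.shape)  # (32,)
```

Because each group undergoes a pure rotation, the vector norm is preserved, and relative positions along any axis enter attention scores through angle differences, which is what allows arbitrary resolutions and durations at inference.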