🤖 AI Summary
This work addresses a key limitation of conventional causal one-dimensional (1D) autoregressive (AR) vision generative models: they rely on fixed-length token sequences and struggle to adapt to varying image complexity. To overcome this, the authors propose the Soft Tail-dropping Adaptive Tokenizer (STAT), an end-to-end trainable 1D discrete adaptive tokenizer that dynamically adjusts the number of output tokens by assigning each token a monotonically decreasing retention probability aligned with image-level complexity. STAT is inherently compatible with causal AR architectures and, when applied to ImageNet-1k, enables standard AR models to match or surpass the performance of other probabilistic generative models while demonstrating strong scaling properties.
📝 Abstract
We present Soft Tail-dropping Adaptive Tokenizer (STAT), a 1D discrete visual tokenizer that adaptively chooses the number of output tokens per image according to its structural complexity and level of detail. STAT encodes an image into a sequence of discrete codes together with per-token keep probabilities. Beyond standard autoencoder objectives, we regularize these keep probabilities to be monotonically decreasing along the sequence and explicitly align their distribution with an image-level complexity measure. As a result, STAT produces length-adaptive 1D visual tokens that are naturally compatible with causal 1D autoregressive (AR) visual generative models. On ImageNet-1k, equipping vanilla causal AR models with STAT yields competitive or superior visual generation quality compared to other probabilistic model families, while also exhibiting favorable scaling behavior that has been elusive in prior vanilla AR visual generation attempts.
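The mechanism the abstract describes (per-token keep probabilities that decrease monotonically along the sequence, an alignment of their mass with an image-level complexity score, and tail-dropping at inference) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the running-minimum trick for enforcing monotonicity, the 0.5 truncation threshold, and the squared-error form of the alignment term are all assumptions made for the example.

```python
import numpy as np

def keep_probs(keep_logits):
    """Per-token keep probabilities, forced to be monotonically
    non-increasing along the sequence via a running minimum.
    (One simple way to realise the monotonicity constraint; the
    paper's actual regulariser may differ.)"""
    p = 1.0 / (1.0 + np.exp(-np.asarray(keep_logits, dtype=float)))
    return np.minimum.accumulate(p)

def adaptive_length(p, threshold=0.5):
    """Soft tail-dropping at inference: keep the prefix of tokens
    whose keep probability clears the threshold (hypothetical rule)."""
    return int(np.sum(p >= threshold))

def complexity_alignment_loss(p, complexity):
    """Toy alignment term: the expected kept fraction of tokens
    should track an image-level complexity score in [0, 1]."""
    return float((p.mean() - complexity) ** 2)

# Example: five token logits from a hypothetical encoder head.
p = keep_probs([3.0, 2.0, 2.5, -1.0, 0.2])
n = adaptive_length(p)          # a simple image keeps fewer tokens
loss = complexity_alignment_loss(p, complexity=0.6)
```

Because the probabilities are non-increasing, truncating at the first sub-threshold token never discards a high-confidence token later in the sequence, which is what makes the resulting variable-length prefixes directly usable by a causal AR decoder.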