🤖 AI Summary
To address the challenges of excessively long sequences, high training cost, and the difficulty of exploiting the intrinsic hierarchy of images in autoregressive image generation, this paper proposes the Next Patch Prediction (NPP) paradigm. NPP aggregates low-level image tokens into high-information-density patch tokens, drastically reducing sequence length. It further introduces a multi-scale, coarse-to-fine hierarchical patch grouping strategy that requires no model architecture modifications, additional parameters, or custom tokenizers, ensuring generality and plug-and-play compatibility. Evaluated with standard autoregressive modeling and FID on ImageNet, NPP reduces training cost to approximately 0.6× that of baseline methods while improving FID by up to 1.0, with consistent gains across models ranging from 100M to 1.4B parameters. The core contribution is integrating the inherent hierarchy of images into autoregressive generation, achieving simultaneous gains in efficiency, sample quality, and deployment practicality.
📝 Abstract
Autoregressive models built on the Next Token Prediction (NTP) paradigm show great potential for a unified framework that integrates both language and vision tasks. In this work, we rethink NTP for autoregressive image generation and propose a novel Next Patch Prediction (NPP) paradigm. Our key idea is to group and aggregate image tokens into patch tokens with high information density. With patch tokens as a shorter input sequence, the autoregressive model is trained to predict the next patch, significantly reducing the computational cost. We further propose a multi-scale, coarse-to-fine patch grouping strategy that exploits the natural hierarchical structure of image data. Experiments on models ranging from 100M to 1.4B parameters demonstrate that next patch prediction reduces the training cost to roughly 0.6× that of the baseline while improving image generation quality by up to 1.0 FID on the ImageNet benchmark. Notably, our method retains the original autoregressive model architecture without introducing additional trainable parameters or a custom image tokenizer, ensuring flexibility and seamless adaptation to various autoregressive models for visual generation.
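The core grouping step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes patch tokens are formed by average-pooling non-overlapping k×k neighborhoods of token embeddings on the 2D token grid (the exact aggregation function, the grid size, and the helper name `group_tokens_into_patches` are all assumptions for illustration).

```python
import numpy as np

def group_tokens_into_patches(tokens, grid_h, grid_w, k):
    """Aggregate a flattened (grid_h*grid_w, d) sequence of image-token
    embeddings into patch tokens by averaging each non-overlapping
    k x k neighborhood, shortening the sequence by a factor of k*k."""
    d = tokens.shape[-1]
    grid = tokens.reshape(grid_h, grid_w, d)
    # Split the grid into k x k blocks and mean-pool each block.
    patches = grid.reshape(grid_h // k, k, grid_w // k, k, d).mean(axis=(1, 3))
    return patches.reshape(-1, d)

# A 16x16 token grid with 8-dim embeddings: 256 tokens -> 64 patch tokens for k=2.
tokens = np.random.rand(16 * 16, 8)
patch_tokens = group_tokens_into_patches(tokens, 16, 16, 2)
print(patch_tokens.shape)  # (64, 8)
```

A coarse-to-fine schedule could then train first on large k (short, coarse sequences) and progressively reduce k toward the full-resolution token sequence; since pooling adds no parameters, the same autoregressive backbone is reused at every scale.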