🤖 AI Summary
Visual autoregressive modeling faces challenges in scalability, long-range dependency capture, computational efficiency, and geometric/physical consistency, particularly for image, video, 3D, and multimodal generation. Method: We systematically survey roughly 250 works, unifying pixel-level, token-level, and scale-level representations; integrating sequence modeling, discrete representation learning (e.g., VQ-VAE, the DALL·E tokenizer), causal attention, and hierarchical autoregressive decoding; and establishing connections to diffusion models and GANs. Contribution/Results: We propose the first comprehensive taxonomy spanning representation granularities and cross-cutting dimensions (hierarchical, multimodal, task-agnostic), construct an open-source knowledge base, identify core bottlenecks (including weak 3D structural priors and high inference latency), and outline future directions in scalable architectures, efficient sampling, and physics-aware generation.
📝 Abstract
Autoregressive modeling has achieved remarkable success in natural language processing (NLP). Recently, autoregressive models have also emerged as a significant area of focus in computer vision, where they excel at producing high-quality visual content. Autoregressive models in NLP typically operate on subword tokens. In computer vision, however, the representation strategy can vary across levels, i.e., pixel-level, token-level, or scale-level, reflecting the diverse and hierarchical nature of visual data compared with the sequential structure of language. This survey comprehensively examines the literature on autoregressive models applied to vision. To improve readability for researchers from diverse backgrounds, we start with preliminary sequence representation and modeling in vision. Next, we divide the fundamental frameworks of visual autoregressive models into three general sub-categories, namely pixel-based, token-based, and scale-based models, according to the representation strategy. We then explore the interconnections between autoregressive models and other generative models. Furthermore, we present a multifaceted categorization of autoregressive models in computer vision, covering image generation, video generation, 3D generation, and multimodal generation. We also elaborate on their applications in diverse domains, including emerging areas such as embodied AI and 3D medical AI, with about 250 related references. Finally, we highlight the current challenges facing autoregressive models in vision and suggest potential research directions. We have also set up a GitHub repository organizing the papers covered in this survey at: https://github.com/ChaofanTao/Autoregressive-Models-in-Vision-Survey.
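The core idea shared by all three representation strategies is the chain-rule factorization p(x) = ∏ₜ p(xₜ | x₍₋ₜ₎), enforced at training time by a causal attention mask. A minimal, framework-free sketch in NumPy (the function names and the uniform toy distribution are illustrative assumptions, not any specific model from the survey):

```python
import numpy as np

def causal_mask(n):
    # Lower-triangular boolean mask: position i may attend
    # only to positions j <= i (its predecessors and itself).
    return np.tril(np.ones((n, n), dtype=bool))

def ar_log_likelihood(step_probs, tokens):
    # Chain rule: log p(x) = sum_t log p(x_t | x_<t).
    # step_probs[t] is a hypothetical model's distribution over the
    # vocabulary at step t, conditioned on tokens[:t].
    return sum(np.log(step_probs[t][tokens[t]]) for t in range(len(tokens)))

# Toy example: 3 tokens, vocabulary of size 4, uniform predictions.
probs = [np.full(4, 0.25) for _ in range(3)]
ll = ar_log_likelihood(probs, [0, 1, 2])  # 3 * log(0.25)
```

Pixel-, token-, and scale-based models differ only in what the sequence elements xₜ are (raw pixels, discrete codebook indices from a tokenizer such as VQ-VAE, or whole resolution scales), not in this factorization.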