Learning to Expand Images for Efficient Visual Autoregressive Modeling

πŸ“… 2025-11-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing autoregressive vision generation models suffer from inefficient token-by-token decoding or the complexity of multi-scale modeling. This paper proposes Expanding Autoregressive Representation (EAR), a novel framework inspired by the human center-outward pattern of visual perception, which employs a spiral token expansion order to explicitly model spatial continuity. EAR integrates parallel autoregressive decoding with a length-adaptive mechanism that dynamically adjusts the number of tokens predicted per step, thereby jointly optimizing generation quality, inference speed, and perceptual relevance. Experiments on ImageNet demonstrate that EAR achieves, for the first time within a single-scale autoregressive framework, a Pareto-optimal trade-off between synthesis fidelity and inference efficiency, significantly outperforming state-of-the-art methods. This work establishes a new paradigm for efficient, scalable, and cognitively aligned autoregressive visual modeling.

πŸ“ Abstract
Autoregressive models have recently shown great promise in visual generation by leveraging discrete token sequences akin to language modeling. However, existing approaches often suffer from inefficiency, either due to token-by-token decoding or the complexity of multi-scale representations. In this work, we introduce Expanding Autoregressive Representation (EAR), a novel generation paradigm that emulates the human visual system's center-outward perception pattern. EAR unfolds image tokens in a spiral order from the center and progressively expands outward, preserving spatial continuity and enabling efficient parallel decoding. To further enhance flexibility and speed, we propose a length-adaptive decoding strategy that dynamically adjusts the number of tokens predicted at each step. This biologically inspired design not only reduces computational cost but also improves generation quality by aligning the generation order with perceptual relevance. Extensive experiments on ImageNet demonstrate that EAR achieves state-of-the-art trade-offs between fidelity and efficiency on single-scale autoregressive models, setting a new direction for scalable and cognitively aligned autoregressive image generation.
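The spiral, center-outward token order described above can be sketched as a plain grid-traversal routine. The code below is an illustrative reconstruction of one way to enumerate an n x n token grid from the center outward; the function name and the choice of starting cell are assumptions, not details taken from the paper.

```python
def spiral_order(n):
    """Enumerate cells of an n x n token grid in a center-outward spiral.

    Illustrative sketch only: starts at the (upper-left) center cell and
    walks right/down/left/up with growing run lengths, skipping cells that
    fall outside the grid. Not claimed to be EAR's exact ordering.
    """
    r = c = (n - 1) // 2  # start near the grid center
    order, seen = [], set()

    def visit(rr, cc):
        if 0 <= rr < n and 0 <= cc < n and (rr, cc) not in seen:
            seen.add((rr, cc))
            order.append((rr, cc))

    visit(r, c)
    # Direction cycle: right, down, left, up; run lengths 1,1,2,2,3,3,...
    dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    step, d = 1, 0
    while len(order) < n * n:
        for _ in range(2):  # each run length is used for two directions
            dr, dc = dirs[d % 4]
            for _ in range(step):
                r += dr
                c += dc
                visit(r, c)
            d += 1
        step += 1
    return order
```

Because consecutive tokens in this order are spatially adjacent, a model conditioned on the prefix always sees a contiguous central patch, which is the spatial-continuity property the abstract emphasizes.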
Problem

Research questions and friction points this paper is trying to address.

Inefficient token-by-token decoding in visual autoregressive models
Complexity issues with multi-scale representations in image generation
Poor alignment between generation order and perceptual relevance patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expanding Autoregressive Representation (EAR) with a spiral generation order
Length-adaptive decoding for dynamic token prediction
Center-outward perception pattern for spatial continuity
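The length-adaptive decoding idea listed above can be illustrated with a simple schedule generator: early steps predict few tokens (the central region), later steps predict progressively larger groups in parallel. The growth factor and starting size below are made-up knobs for illustration, not values from the paper.

```python
def adaptive_schedule(total_tokens, first=1, growth=1.5):
    """Split total_tokens into per-step group sizes that grow geometrically.

    Illustrative sketch of a length-adaptive schedule: each decoding step
    predicts `sizes[k]` tokens in parallel, so the full image needs only
    len(sizes) forward passes instead of total_tokens.
    `first` and `growth` are assumed hyperparameters, not EAR's values.
    """
    sizes, size, remaining = [], float(first), total_tokens
    while remaining > 0:
        take = min(remaining, max(1, int(size)))
        sizes.append(take)
        remaining -= take
        size *= growth
    return sizes
```

For a 16 x 16 token grid (256 tokens), such a schedule finishes in roughly a dozen steps rather than 256, which is the efficiency gain the summary attributes to parallel, length-adaptive decoding.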
Ruiqing Yang
University of Electronic Science and Technology of China
Kaixin Zhang
School of Computer Science and Engineering, Central South University
Zheng Zhang
Xidian University
Shan You
SenseTime Research
deep learning · multimodal LLM · edge AI
Tao Huang
Shanghai Jiao Tong University