Token-Shuffle: Towards High-Resolution Image Generation with Autoregressive Models

📅 2025-04-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autoregressive (AR) image generation suffers from prohibitive computational overhead at high resolutions because of the large number of image tokens, which severely limits training and inference efficiency. To address this, we propose a token-shuffle/unshuffle mechanism: spatially local image tokens are merged along the channel dimension to sharply compress the sequence length processed by the Transformer, and the merged tokens are unshuffled afterwards to restore the spatial arrangement for output. The design exploits the dimensional redundancy between low-dimensional visual codes and the high-dimensional language vocabulary, fits into a lightweight, unified next-token prediction architecture, requires no additional pretraining or separate text encoder, and is optimized end to end jointly with textual prompts. This work achieves, for the first time, pure AR text-to-image generation at 2048x2048 resolution. On GenAI-Bench, our 2.7B-parameter model scores 0.77 on hard prompts, surpassing the AR model LlamaGen by 0.18 and the diffusion model LDM by 0.15. Large-scale human evaluation confirms superior performance in text alignment, visual fidelity, and perceptual quality.
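As a rough illustration of why the token count matters, the sketch below computes image-token sequence lengths at a few resolutions; the 16x tokenizer downsampling factor and the 2x2 shuffle window are assumptions for illustration, not numbers taken from the paper.

```python
# Back-of-the-envelope image-token counts (illustrative assumptions only).
def num_image_tokens(resolution: int, downsample: int = 16, shuffle_window: int = 1) -> int:
    side = resolution // downsample               # token-grid side length after the tokenizer
    return (side * side) // (shuffle_window ** 2)

for res in (512, 1024, 2048):
    plain = num_image_tokens(res)                        # vanilla AR sequence length
    shuffled = num_image_tokens(res, shuffle_window=2)   # with a 2x2 token-shuffle
    print(f"{res}x{res}: {plain} image tokens -> {shuffled} after shuffle")
```

Under these assumptions, a 2048x2048 image drops from 16,384 image tokens to 4,096 merged tokens, which is the kind of sequence-length reduction the method relies on for efficient training and inference.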

📝 Abstract
Autoregressive (AR) models, long dominant in language generation, are increasingly applied to image synthesis but are often considered less competitive than diffusion-based models. A primary limitation is the substantial number of image tokens required by AR models, which constrains both training and inference efficiency as well as attainable image resolution. To address this, we present Token-Shuffle, a novel yet simple method that reduces the number of image tokens in the Transformer. Our key insight is the dimensional redundancy of visual vocabularies in Multimodal Large Language Models (MLLMs), where low-dimensional visual codes from the visual encoder are directly mapped to the high-dimensional language vocabulary. Leveraging this, we introduce two key operations: token-shuffle, which merges spatially local tokens along the channel dimension to reduce the input token number, and token-unshuffle, which untangles the inferred tokens after the Transformer blocks to restore the spatial arrangement for output. Trained jointly with textual prompts, our strategy requires no additional pretrained text encoder and enables MLLMs to support extremely high-resolution image synthesis in a unified next-token prediction framework while maintaining efficient training and inference. For the first time, we push the boundary of AR text-to-image generation to a resolution of 2048x2048 with gratifying generation performance. On GenAI-Bench, our 2.7B model achieves a 0.77 overall score on hard prompts, outperforming the AR model LlamaGen by 0.18 and the diffusion model LDM by 0.15. Exhaustive large-scale human evaluations also demonstrate our prominent image generation ability in terms of text alignment, visual flaws, and visual appearance. We hope that Token-Shuffle can serve as a foundational design for efficient high-resolution image generation within MLLMs.
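To make the two operations concrete, here is a minimal PyTorch-style sketch of token-shuffle and token-unshuffle as a reshape around a channel projection. The module names, the linear projections, and the 2x2 window are assumptions for illustration; this is not the authors' implementation.

```python
import torch
import torch.nn as nn


class TokenShuffle(nn.Module):
    """Merge each s x s block of spatially local tokens into one token along the channel dim."""

    def __init__(self, dim: int, s: int = 2):
        super().__init__()
        self.s = s
        # Project the concatenated channels back to the model width (an assumption).
        self.proj = nn.Linear(dim * s * s, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, h, w, d = x.shape                       # x: (batch, height, width, dim) token grid
        s = self.s
        x = x.reshape(b, h // s, s, w // s, s, d).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(b, (h // s) * (w // s), s * s * d)   # fewer tokens, wider channels
        return self.proj(x)


class TokenUnshuffle(nn.Module):
    """Expand each merged token back into its s x s block to restore the spatial layout."""

    def __init__(self, dim: int, s: int = 2):
        super().__init__()
        self.s = s
        self.proj = nn.Linear(dim, dim * s * s)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        b, n, d = x.shape
        s = self.s
        x = self.proj(x).reshape(b, h // s, w // s, s, s, d).permute(0, 1, 3, 2, 4, 5)
        return x.reshape(b, h, w, d)


if __name__ == "__main__":
    tokens = torch.randn(1, 32, 32, 256)                  # a 32x32 grid of visual tokens
    merged = TokenShuffle(256, s=2)(tokens)               # -> (1, 256, 256): 4x shorter sequence
    restored = TokenUnshuffle(256, s=2)(merged, 32, 32)   # -> (1, 32, 32, 256)
    print(merged.shape, restored.shape)
```

In the paper's setup the Transformer blocks would operate on the shorter merged sequence, with token-unshuffle applied afterwards to recover the full token grid for next-token prediction.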
Problem

Research questions and friction points this paper is trying to address.

Reduces image tokens to enhance AR model efficiency
Enables high-resolution image synthesis up to 2048x2048
Improves text-to-image alignment and visual quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-Shuffle reduces the number of image tokens in the Transformer
Merges spatially local tokens along the channel dimension
Enables image generation at up to 2048x2048 resolution
👥 Authors
Xu Ma (Northeastern University)
Peize Sun (Meta FAIR; HKU): Computer Vision, Deep Learning
Haoyu Ma (Meta GenAI)
Hao Tang (Meta FAIR)
Chih-Yao Ma (Member of Technical Staff @ Microsoft AI): Generative Model, Computer Vision, Natural Language Processing, Machine Learning, Deep Learning
Jialiang Wang (Research Scientist, Meta AI): Computer Vision, Generative AI
Kunpeng Li (Research Scientist, Meta Superintelligence Labs): Computer Vision, Deep Learning
Xiaoliang Dai (Research Scientist, Meta GenAI): Generative AI, Computer Vision
Yujun Shi (National University of Singapore)
Xuan Ju (The Chinese University of Hong Kong): Multimodal Image & Video Generation, Computer Vision
Yushi Hu (University of Washington): Natural Language Processing, Computer Vision
Artsiom Sanakoyeu (Research Scientist at GenAI): Computer Vision, Generative AI
Felix Juefei-Xu (Research Scientist, Meta Superintelligence Labs): Generative Models, Deep Learning, Computer Vision, AI Safety, Adversarial Robustness
Ji Hou (Research Scientist, Meta Superintelligence Labs): Generative AI, 3D Computer Vision
Junjiao Tian (PhD, Georgia Institute of Technology): Machine Learning, Computer Vision, Natural Language Processing, Robotics
Tao Xu (Meta GenAI)
Tingbo Hou (Google DeepMind): Computer Vision, Generative AI
Yen-Cheng Liu (Research Scientist, Meta): Computer Vision, Machine Learning, Artificial Intelligence
Zecheng He (Meta GenAI): Generative AI, Efficient Model, AI Security and Privacy
Zijian He (Meta GenAI)
Matt Feiszli (Facebook AI Research): Machine Learning, Computer Vision, Harmonic Analysis, Geometry
Peizhao Zhang (Research Scientist, Meta MSL): Computer Vision, Computer Graphics
Peter Vajda (Meta GenAI)
Sam S. Tsai (Stealth Startup, ex-Meta, ex-Amazon, Stanford): Generative AI, MLLM, Visual Search, Computer Vision, Multimedia
Yun Fu (Northeastern University)