D-AR: Diffusion via Autoregressive Models

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models and autoregressive language models employ fundamentally distinct architectures, hindering unified visual and linguistic generative modeling. Method: This work reformulates image diffusion as standard autoregressive sequence modeling: a discrete image tokenizer maps the denoising process onto a coarse-to-fine ordered token sequence; training and inference use conventional causal attention and next-token prediction, without altering the diffusion objective or adding auxiliary modules. Contribution/Results: It achieves, for the first time, lossless reconstruction of the full diffusion process with a pure autoregressive architecture, enabling real-time generation previews and zero-shot layout control. Evaluated on ImageNet, the model attains an FID of 2.09, demonstrating that autoregressive models can generate high-fidelity images competitively. This establishes a scalable, interpretable pathway toward large language model-driven general-purpose visual generation, bridging the architectural gap between vision and language modeling.

📝 Abstract
This paper presents Diffusion via Autoregressive models (D-AR), a new paradigm recasting the image diffusion process as a vanilla autoregressive procedure in the standard next-token-prediction fashion. We start by designing a tokenizer that converts images into sequences of discrete tokens, where tokens at different positions can be decoded into different diffusion denoising steps in pixel space. Thanks to the diffusion properties, these tokens naturally follow a coarse-to-fine order, which directly lends itself to autoregressive modeling. Therefore, we apply standard next-token prediction on these tokens, without modifying any underlying designs (either causal masks or training/inference strategies), and such sequential autoregressive token generation directly mirrors the diffusion procedure in image space. That is, once the autoregressive model generates an increment of tokens, we can directly decode these tokens into the corresponding diffusion denoising step in a streaming manner. Our pipeline naturally reveals several intriguing properties; for example, it supports consistent previews when generating only a subset of tokens and enables zero-shot layout-controlled synthesis. On the standard ImageNet benchmark, our method achieves 2.09 FID using a 775M Llama backbone with 256 discrete tokens. We hope our work can inspire future research on unified autoregressive architectures for visual synthesis, especially with large language models. Code and models will be available at https://github.com/showlab/D-AR
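The abstract's core loop, generating tokens with plain next-token prediction and decoding each new increment into a denoising step, can be sketched as below. This is a minimal illustration, not the authors' implementation: the names `StubARModel`, `StubDecoder`, `generate_streaming`, and the chunk size are all assumptions made for the example.

```python
# Hypothetical sketch of D-AR-style streaming generation (illustrative only):
# an autoregressive model emits discrete tokens in coarse-to-fine order, and
# each newly generated chunk of tokens is decoded into the corresponding
# diffusion denoising step, yielding a live preview as tokens stream in.

class StubARModel:
    """Stand-in for the causal Llama-style backbone (not the real model)."""
    def next_token(self, prefix):
        # Real model: next-token prediction conditioned on the causal prefix.
        return len(prefix) % 1024  # dummy token id from a toy 1024-way codebook

class StubDecoder:
    """Stand-in for the tokenizer's diffusion-step decoder."""
    def decode(self, tokens):
        # Real decoder: maps the token prefix to a partially denoised image.
        return f"preview@{len(tokens)} tokens"

def generate_streaming(ar_model, decoder, num_tokens=256, chunk_size=32):
    """Generate tokens autoregressively, decoding a preview per chunk."""
    tokens, previews = [], []
    while len(tokens) < num_tokens:
        for _ in range(chunk_size):
            # Standard causal generation: each token conditions on all
            # previously generated tokens, with no diffusion-specific change.
            tokens.append(ar_model.next_token(tokens))
        # Decode the prefix so far into its denoising step (the preview).
        previews.append(decoder.decode(tokens))
    return tokens, previews
```

With 256 tokens and chunks of 32, the loop produces 8 progressively refined previews, mirroring the coarse-to-fine denoising trajectory the paper describes.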
Problem

Research questions and friction points this paper is trying to address.

Diffusion and autoregressive language models use fundamentally distinct architectures
This architectural mismatch hinders unified visual and linguistic generative modeling
No standard way to drive image diffusion with plain next-token prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recasts diffusion as autoregressive next-token prediction
Tokenizer converts images to coarse-to-fine tokens
Enables streaming denoising with standard autoregressive models
Ziteng Gao
National University of Singapore
Computer Vision · Generative Modeling
Mike Zheng Shou
Show Lab, National University of Singapore