Autoregressive Models Rival Diffusion Models at Any-Order Generation

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing diffusion-based language models suffer from limited modeling depth and insufficient sample quality and stability in arbitrary-order generation. This work proposes the A3 framework, which generalizes autoregressive modeling into a structured multi-group prediction process that supports arbitrary token subsets and generation orders, thereby unifying the probabilistic rigor of autoregressive models with the generative flexibility of diffusion models. Leveraging a two-stream attention architecture and a progressive adaptation strategy, A3 efficiently transforms pretrained autoregressive models into any-order generators. Experiments demonstrate that the proposed method significantly outperforms existing diffusion models on question answering, commonsense reasoning, and story infilling tasks, while preserving efficient parallel and bidirectional decoding.

📝 Abstract
Diffusion language models enable any-order generation and bidirectional conditioning, offering appealing flexibility for tasks such as infilling, rewriting, and self-correction. However, their formulation (predicting one part of a sequence from another within a single-step dependency) limits modeling depth and often yields lower sample quality and stability than autoregressive (AR) models. To address this, we revisit autoregressive modeling as a foundation and reformulate diffusion-style training into a structured multi-group prediction process. We propose Any-order Any-subset Autoregressive modeling (A3), a generalized framework that extends the standard AR factorization to arbitrary token groups and generation orders. A3 preserves the probabilistic rigor and multi-layer dependency modeling of AR while inheriting diffusion models' flexibility for parallel and bidirectional generation. We implement A3 through a two-stream attention architecture and a progressive adaptation strategy that transitions pretrained AR models toward any-order prediction. Experiments on question answering, commonsense reasoning, and story infilling demonstrate that A3 outperforms diffusion-based models while maintaining flexible decoding. This work offers a unified approach for a flexible, efficient, and novel language modeling paradigm.
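The generalized factorization described above can be illustrated with a small sketch. This is a toy, not the paper's implementation: the function name `any_order_factorization` and the contiguous-group partition scheme are illustrative choices. It shows how a sequence can be split into arbitrary token groups in a random order, where each group is predicted conditioned on all previously revealed groups, so the joint probability factorizes as p(x) = Π_k p(x_{g_k} | x_{g_1}, …, x_{g_{k-1}}).

```python
import random

def any_order_factorization(tokens, num_groups, seed=0):
    """Illustrative sketch of an any-order, any-subset factorization.

    Shuffles the token positions, partitions them into groups, and
    returns the per-step prediction plan: each step predicts one group
    of positions conditioned on every position revealed so far.
    """
    rng = random.Random(seed)
    positions = list(range(len(tokens)))
    rng.shuffle(positions)  # a random generation order over positions
    size = -(-len(positions) // num_groups)  # ceiling division
    groups = [positions[i:i + size] for i in range(0, len(positions), size)]
    steps = []
    revealed = []
    for g in groups:
        # Context is bidirectional over revealed positions, which need
        # not form a left-to-right prefix of the sequence.
        steps.append({"predict": sorted(g), "context": sorted(revealed)})
        revealed.extend(g)
    return steps

steps = any_order_factorization(list("storytime"), num_groups=3, seed=42)
for s in steps:
    print(s)
```

Setting `num_groups` to the sequence length recovers a fully sequential (but permuted-order) AR factorization, while a single group collapses to one-shot parallel prediction; the interesting regimes lie in between.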
Problem

Research questions and friction points this paper is trying to address.

autoregressive models
diffusion models
any-order generation
language modeling
bidirectional conditioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Any-order generation
Autoregressive modeling
Diffusion models
Two-stream attention
Flexible decoding
Tianqi Du
PhD Student, Peking University
Machine Learning
Lizhe Fang
National Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University
Weijie Yang
School of Mathematical Sciences, Peking University
Chenheng Zhang
National Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University
Zeming Wei
Ph.D. Candidate, Peking University
Trustworthy AI, Adversarial Robustness, Explainability
Yifei Wang
Amazon AGI SF Lab
Yisen Wang
Assistant Professor, Peking University
Machine Learning, Self-Supervised Learning, Large Language Models, Safety