🤖 AI Summary
Existing diffusion-based language models suffer from limited modeling depth and from insufficient sample quality and stability in arbitrary-order generation. This work proposes the A3 framework, which generalizes autoregressive modeling into a structured multi-group prediction process that supports arbitrary token subsets and generation orders, unifying the probabilistic rigor of autoregressive models with the generative flexibility of diffusion models. Leveraging a two-stream attention architecture and a progressive adaptation strategy, A3 efficiently transforms pretrained autoregressive models into arbitrary-order generators. Experiments show that the proposed method significantly outperforms existing diffusion models on question answering, commonsense reasoning, and story infilling, while preserving efficient parallel and bidirectional decoding.
📝 Abstract
Diffusion language models enable any-order generation and bidirectional conditioning, offering appealing flexibility for tasks such as infilling, rewriting, and self-correction. However, their formulation (predicting one part of a sequence from another within a single dependency step) limits modeling depth and often yields lower sample quality and stability than autoregressive (AR) models. To address this, we revisit autoregressive modeling as a foundation and reformulate diffusion-style training into a structured multi-group prediction process. We propose Any-order Any-subset Autoregressive modeling (A3), a generalized framework that extends the standard AR factorization to arbitrary token groups and generation orders. A3 preserves the probabilistic rigor and multi-layer dependency modeling of AR while inheriting diffusion models' flexibility for parallel and bidirectional generation. We implement A3 through a two-stream attention architecture and a progressive adaptation strategy that transitions pretrained AR models toward any-order prediction. Experiments on question answering, commonsense reasoning, and story infilling demonstrate that A3 outperforms diffusion-based models while maintaining flexible decoding. This work offers a unified path toward a flexible and efficient language modeling paradigm.
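The generalized factorization described above can be sketched as follows (the notation is ours, not taken from the paper): partition the sequence $x$ into token groups $g_1, \ldots, g_K$ and visit them in an arbitrary order $\sigma$, conditioning each group on all previously generated groups:

```latex
p(x) \;=\; \prod_{k=1}^{K} p\!\left(x_{g_{\sigma(k)}} \,\middle|\, x_{g_{\sigma(1)}}, \ldots, x_{g_{\sigma(k-1)}}\right)
```

Standard left-to-right AR modeling is recovered when every group is a single token and $\sigma$ is the identity order, while choosing larger groups and other orders yields the parallel, bidirectional decoding the abstract attributes to diffusion models.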