DART: Denoising Autoregressive Transformer for Scalable Text-to-Image Generation

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Existing diffusion models suffer from low generation efficiency, underuse of the generation trajectory, and difficulty in unifying text and image modeling. To address these limitations, the authors propose DART, a non-Markovian Transformer framework that integrates autoregressive modeling into the diffusion paradigm. DART operates directly in continuous space, bypassing image quantization and discrete tokenization, and denoises image patches jointly across spatial and spectral dimensions over iterative steps. Its core contribution is showing that standard language-model architectures transfer to image generation largely unchanged, enabling joint cross-modal training on text and images. Experiments demonstrate that DART achieves generation quality competitive with state-of-the-art diffusion models on both class-conditional and text-to-image synthesis, while improving training and inference efficiency and exhibiting strong scalability.

📝 Abstract
Diffusion models have become the dominant approach for visual generation. They are trained by denoising a Markovian process which gradually adds noise to the input. We argue that the Markovian property limits the model's ability to fully utilize the generation trajectory, leading to inefficiencies during training and inference. In this paper, we propose DART, a transformer-based model that unifies autoregressive (AR) and diffusion within a non-Markovian framework. DART iteratively denoises image patches spatially and spectrally using an AR model that has the same architecture as standard language models. DART does not rely on image quantization, which enables more effective image modeling while maintaining flexibility. Furthermore, DART seamlessly trains with both text and image data in a unified model. Our approach demonstrates competitive performance on class-conditioned and text-to-image generation tasks, offering a scalable, efficient alternative to traditional diffusion models. Through this unified framework, DART sets a new benchmark for scalable, high-quality image synthesis.
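The non-Markovian conditioning the abstract describes can be sketched in a few lines: a Markovian sampler predicts the next state from the latest noisy image alone, whereas here each denoising step attends over the entire generation trajectory so far, autoregressively appending one state per step. The linear "model" and softmax pooling below are illustrative stand-ins, not the paper's transformer architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(trajectory, weights):
    """One non-Markovian denoising step: the prediction conditions on the
    full trajectory of previous states, not just the latest noisy image.
    A toy linear scorer stands in for a causal transformer (hypothetical)."""
    context = np.stack(trajectory)      # (t, d): all states generated so far
    scores = context @ weights          # (t,): attention-like step scores
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                  # softmax over trajectory steps
    return attn @ context               # weighted pooling -> next estimate

def generate(num_steps=5, dim=4):
    """Autoregressive generation: start from pure noise, keep the whole
    trajectory in context, and append one denoised state per step."""
    weights = rng.normal(size=dim)
    trajectory = [rng.normal(size=dim)]  # x_T ~ N(0, I)
    for _ in range(num_steps):
        trajectory.append(denoise_step(trajectory, weights))
    return trajectory

traj = generate()
```

A Markovian variant would call `denoise_step([trajectory[-1]], weights)` instead, discarding all earlier states; the difference in what the model can condition on is the paper's central argument.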
Problem

Research questions and friction points this paper is trying to address.

Image Generation
Diffusion Models
Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

DART
Image Generation
Text-to-Image Synthesis