Autoregressive Distillation of Diffusion Transformers

📅 2025-04-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing diffusion Transformers achieve high generation quality but incur substantial computational overhead due to multi-step sampling; conversely, few-step distillation methods based on probability flow ODEs suffer from exposure bias. This paper proposes AutoRegressive Distillation (ARD), a novel distillation paradigm that leverages the historical denoising trajectory of the ODE solver to predict future denoising steps. ARD models temporal dependencies via token-wise time embeddings and a block-wise causal attention mask, while injecting historical information only in the lower transformer layers to balance efficiency and fidelity. On ImageNet-256, ARD achieves an FID of 1.84 with only four sampling steps, reducing FID degradation by 5× over baseline distillation methods while adding merely 1.1% extra FLOPs. In text-to-image synthesis, ARD surpasses publicly available 1024p distilled models in prompt adherence.
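The key sampling-time difference from standard few-step distillation is that each student step conditions on the whole predicted trajectory so far, not only the latest latent. A minimal sketch of that loop, with a hypothetical `student` callable standing in for the distilled network (names and signature are assumptions, not the paper's API):

```python
import numpy as np

def ard_style_sample(student, x_T, num_steps=4):
    """Hypothetical few-step sampler in the ARD style.

    Each step passes the full history of predicted latents to the
    student, rather than only the most recent one -- the mechanism
    the summary credits with mitigating exposure bias.
    """
    history = [x_T]  # trajectory starts from the initial noise latent
    for step in range(num_steps):
        # The student sees the entire predicted trajectory so far.
        x_next = student(history, step)
        history.append(x_next)
    return history[-1]

# Toy stand-in "student" that averages the history (illustration only).
toy_student = lambda hist, step: np.mean(np.stack(hist), axis=0)
sample = ard_style_sample(toy_student, np.ones((2, 2)), num_steps=4)
```

A conventional distilled sampler would instead call `student(history[-1], step)`, so any error in the latest latent is the only signal available; keeping the history gives the student a coarse-grained record that is less sensitive to a single bad step.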

📝 Abstract
Diffusion models with transformer architectures have demonstrated promising capabilities in generating high-fidelity images and scalability for high resolution. However, the iterative sampling process required for synthesis is very resource-intensive. A line of work has focused on distilling solutions to probability flow ODEs into few-step student models. Nevertheless, existing methods have been limited by their reliance on the most recent denoised samples as input, rendering them susceptible to exposure bias. To address this limitation, we propose AutoRegressive Distillation (ARD), a novel approach that leverages the historical trajectory of the ODE to predict future steps. ARD offers two key benefits: 1) it mitigates exposure bias by utilizing a predicted historical trajectory that is less susceptible to accumulated errors, and 2) it leverages the previous history of the ODE trajectory as a more effective source of coarse-grained information. ARD modifies the teacher transformer architecture by adding token-wise time embedding to mark each input from the trajectory history and employs a block-wise causal attention mask for training. Furthermore, incorporating historical inputs only in lower transformer layers enhances performance and efficiency. We validate the effectiveness of ARD in class-conditioned generation on ImageNet and T2I synthesis. Our model achieves a $5\times$ reduction in FID degradation compared to the baseline methods while requiring only 1.1% extra FLOPs on ImageNet-256. Moreover, ARD reaches an FID of 1.84 on ImageNet-256 in merely 4 steps and outperforms the publicly available 1024p text-to-image distilled models in prompt adherence score with a minimal drop in FID compared to the teacher. Project page: https://github.com/alsdudrla10/ARD.
Problem

Research questions and friction points this paper is trying to address.

Reduces resource-intensive iterative sampling in diffusion models
Mitigates exposure bias in distilled few-step student models
Improves image generation fidelity and efficiency with historical ODE trajectory
Innovation

Methods, ideas, or system contributions that make the work stand out.

AutoRegressive Distillation leverages historical ODE trajectory
Token-wise time embedding marks trajectory history inputs
Block-wise causal attention mask enhances training efficiency
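The block-wise causal mask named in the bullets above can be sketched concretely: treat each trajectory step as a block of tokens, and let tokens attend within their own block and to all earlier blocks, but never to later ones. A minimal NumPy construction under that assumption (the block/token sizes are illustrative, not taken from the paper):

```python
import numpy as np

def block_causal_mask(num_blocks: int, tokens_per_block: int) -> np.ndarray:
    """Boolean attention mask; True means attention is allowed.

    Tokens in block i may attend to every token in blocks 0..i
    (their own trajectory step and all earlier ones), but not to
    tokens from later steps -- a block-wise causal pattern.
    """
    # Lower-triangular block pattern: block i sees blocks 0..i.
    blocks = np.tril(np.ones((num_blocks, num_blocks), dtype=int))
    # Tile each block entry into a tokens_per_block square.
    tile = np.ones((tokens_per_block, tokens_per_block), dtype=int)
    return np.kron(blocks, tile).astype(bool)

mask = block_causal_mask(num_blocks=3, tokens_per_block=2)
```

In practice such a mask is added (as `-inf` on disallowed positions) to the attention logits before the softmax; the block structure is what lets a single forward pass train all trajectory steps in parallel, teacher-forcing style.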