Diffusion Transformer-to-Mamba Distillation for High-Resolution Image Generation

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion Transformers (DiTs) incur prohibitive computational overhead in high-resolution image generation due to the quadratic complexity of self-attention. Method: This paper introduces the first Transformer-to-Mamba knowledge distillation framework tailored for diffusion models. It adapts the linear-complexity Mamba architecture to non-causal visual generation, combining layer-level teacher forcing and feature-level knowledge distillation with a lightweight hybrid self-attention/Mamba architecture. High-resolution fine-tuning then enables high-fidelity text-to-image synthesis at resolutions from 512×512 up to 2048×2048. Contribution/Results: The method substantially reduces training cost while preserving global contextual modeling capability. At 2048×2048, generated image quality matches that of DiT baselines, demonstrating the feasibility of sequential, causal Mamba models for generating non-causal, high-fidelity visual content.

📝 Abstract
The quadratic computational complexity of self-attention in diffusion transformers (DiT) introduces substantial computational costs in high-resolution image generation. While the linear-complexity Mamba model emerges as a potential alternative, direct Mamba training remains empirically challenging. To address this issue, this paper introduces diffusion transformer-to-mamba distillation (T2MD), forming an efficient training pipeline that facilitates the transition from the self-attention-based transformer to the linear complexity state-space model Mamba. We establish a diffusion self-attention and Mamba hybrid model that simultaneously achieves efficiency and global dependencies. With the proposed layer-level teacher forcing and feature-based knowledge distillation, T2MD alleviates the training difficulty and high cost of a state space model from scratch. Starting from the distilled 512×512 resolution base model, we push the generation towards 2048×2048 images via lightweight adaptation and high-resolution fine-tuning. Experiments demonstrate that our training path leads to low overhead but high-quality text-to-image generation. Importantly, our results also justify the feasibility of using sequential and causal Mamba models for generating non-causal visual output, suggesting the potential for future exploration.
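The abstract's central trade-off, quadratic-cost self-attention versus a linear-time state-space recurrence, can be sketched with a toy 1-D scan. This is a generic decaying recurrence, not the paper's actual Mamba kernel; `decay` and the example sequence are illustrative assumptions.

```python
# Toy illustration of why a state-space recurrence scales linearly in
# sequence length L while self-attention scales quadratically.

def ssm_scan(x, decay=0.9):
    """Linear-time scan: h_t = decay * h_{t-1} + x_t, one pass over x."""
    h, out = 0.0, []
    for x_t in x:  # O(L) work for a length-L sequence
        h = decay * h + x_t
        out.append(h)
    return out

def attention_pairs(seq_len):
    """Self-attention scores every (query, key) pair: O(L^2) work."""
    return seq_len * seq_len

seq = [1.0, 2.0, 3.0, 4.0]
print(ssm_scan(seq))           # one state update per token
print(attention_pairs(16384))  # pair count at a 16384-token sequence
```

The scan keeps a single running state, so doubling the sequence doubles the work; attention's pairwise interaction quadruples it, which is what makes 2048×2048 generation expensive for a pure DiT.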
Problem

Research questions and friction points this paper is trying to address.

Reduce computational cost in high-resolution image generation
Train linear-complexity Mamba model effectively
Enable Mamba for non-causal visual output generation
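To make the first friction point concrete, a back-of-envelope token count helps. The figures below assume an 8× VAE downsample and 2×2 latent patches, which are common DiT-style defaults but are not confirmed by this page.

```python
# Rough token counts behind the "prohibitive cost" claim, under assumed
# (not paper-confirmed) defaults: 8x VAE downsample, 2x2 latent patches.

def num_tokens(resolution, vae_down=8, patch=2):
    side = resolution // vae_down // patch
    return side * side

for res in (512, 2048):
    t = num_tokens(res)
    print(res, t, t * t)  # resolution, tokens, quadratic attention pairs
```

Under these assumptions, going from 512×512 to 2048×2048 multiplies the token count by 16 but the attention-pair count by 256, while a linear-complexity model's cost grows only 16×.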
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion transformer-to-Mamba distillation (T2MD)
Hybrid model with self-attention and Mamba
Layer-level teacher forcing and feature distillation
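The two distillation signals listed above can be sketched under assumed interfaces: layers as plain callables over lists of floats, with uniform loss weighting. The real T2MD training loop, architectures, and loss weights are not specified on this page.

```python
# Minimal sketch of layer-level teacher forcing plus feature-level
# distillation. All interfaces here are illustrative assumptions.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)

def distill_step(x, teacher_layers, student_layers):
    t_in = x
    loss = 0.0
    for t_layer, s_layer in zip(teacher_layers, student_layers):
        t_out = t_layer(t_in)
        # Layer-level teacher forcing: the student layer reads the
        # teacher's input to this depth, so errors don't compound.
        s_out = s_layer(t_in)
        # Feature-level distillation: match the teacher's layer output.
        loss += mse(s_out, t_out)
        t_in = t_out
    return loss

# Toy usage: a one-layer "teacher" and a slightly-off "student".
teacher = [lambda v: [2.0 * a for a in v]]
student = [lambda v: [2.0 * a + 0.1 for a in v]]
print(distill_step([1.0, 2.0], teacher, student))
```

The key design point is that each student (Mamba) layer is supervised in isolation against its transformer counterpart, rather than propagating its own drifting activations through the whole depth.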
Yuan Yao
University of Rochester
Yicong Hong
Adobe Research
Video Generation · World Models · Embodied AI
Difan Liu
Research Scientist, Adobe Research
Computer Vision · Computer Graphics
Long Mai
Adobe Research
Feng Liu
Adobe Research
Jiebo Luo
University of Rochester