🤖 AI Summary
Diffusion Transformers (DiTs) incur prohibitive computational overhead in high-resolution image generation due to the quadratic complexity of self-attention.
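For intuition, here is a quick back-of-the-envelope sketch of why attention cost explodes with resolution. The 16×16 patch size is an assumption for illustration only; the paper's tokenizer and patching may differ.

```python
# Token count and relative per-layer self-attention cost, assuming 16x16 patches
# (illustrative assumption; the actual tokenizer/patching may differ).
for res in (512, 1024, 2048):
    n_tokens = (res // 16) ** 2
    print(f"{res}x{res}: {n_tokens} tokens, ~{n_tokens ** 2:.2e} attention pairs")
# 2048x2048 yields 16x the tokens of 512x512, hence ~256x the attention cost.
```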
Method: This paper introduces the first Transformer-to-Mamba knowledge distillation framework tailored to diffusion models. It adapts the linear-complexity Mamba architecture to non-causal visual generation, combining layer-wise teacher forcing and feature-level knowledge distillation with a lightweight self-attention/Mamba hybrid architecture. High-resolution fine-tuning then enables high-fidelity text-to-image synthesis at resolutions from 512×512 up to 2048×2048.
Contribution/Results: The method drastically reduces training cost while preserving global contextual modeling. At 2048×2048 resolution, generated image quality matches that of DiT baselines, demonstrating for the first time the feasibility of causal sequence models for high-fidelity visual content generation.
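To make the distillation recipe above concrete, here is a minimal PyTorch sketch of layer-wise teacher forcing combined with feature-level distillation. All names are hypothetical and the MSE objective is an assumption; the paper's exact losses, weighting, and teacher-to-student layer mapping may differ.

```python
import torch.nn.functional as F

def layerwise_distill_loss(teacher_feats, student_layers):
    # Hypothetical sketch: teacher_feats[i] is the teacher (DiT) hidden state
    # entering layer i, so teacher_feats[i + 1] is the target for student layer i.
    # Expects len(teacher_feats) == len(student_layers) + 1.
    loss = 0.0
    for i, student_layer in enumerate(student_layers):
        # Layer-level teacher forcing: each student (Mamba) layer consumes the
        # teacher's input at that depth, so errors do not compound across layers.
        student_out = student_layer(teacher_feats[i])
        # Feature-level distillation: match the teacher's output at the same depth.
        loss = loss + F.mse_loss(student_out, teacher_feats[i + 1])
    return loss / len(student_layers)
```

Feeding each student layer the teacher's own intermediate features decouples the layers during distillation, which is plausibly what makes training the state-space student far cheaper than training it from scratch.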
📝 Abstract
The quadratic computational complexity of self-attention in diffusion transformers (DiT) introduces substantial computational costs in high-resolution image generation. While the linear-complexity Mamba model emerges as a potential alternative, direct Mamba training remains empirically challenging. To address this issue, this paper introduces diffusion transformer-to-mamba distillation (T2MD), forming an efficient training pipeline that facilitates the transition from the self-attention-based transformer to the linear-complexity state-space model Mamba. We establish a diffusion self-attention and Mamba hybrid model that simultaneously achieves efficiency and global dependency modeling. With the proposed layer-level teacher forcing and feature-based knowledge distillation, T2MD alleviates the difficulty and high cost of training a state-space model from scratch. Starting from the distilled 512×512 resolution base model, we push the generation towards 2048×2048 images via lightweight adaptation and high-resolution fine-tuning. Experiments demonstrate that our training path leads to low overhead but high-quality text-to-image generation. Importantly, our results also justify the feasibility of using sequential and causal Mamba models for generating non-causal visual output, suggesting the potential for future exploration.
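The hybrid model described in the abstract interleaves a few self-attention layers among linear-complexity Mamba layers. A minimal sketch of that idea follows; the layer ratio, placement, and block internals are assumptions, and `mamba_factory` stands in for a real Mamba block (e.g. `mamba_ssm.Mamba(d_model=dim)`), not the paper's implementation.

```python
import torch.nn as nn

class HybridStack(nn.Module):
    """Sketch of a self-attention + Mamba hybrid; layout is an assumption."""

    def __init__(self, dim, depth, mamba_factory, attn_every=4, num_heads=8):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            if i % attn_every == 0
            else mamba_factory(dim)  # assumed Mamba-like block, e.g. mamba_ssm.Mamba
            for i in range(depth)
        )

    def forward(self, x):  # x: (batch, tokens, dim)
        for layer in self.layers:
            if isinstance(layer, nn.MultiheadAttention):
                # Sparse non-causal attention layers retain global dependencies.
                x = x + layer(x, x, x, need_weights=False)[0]
            else:
                # Linear-complexity Mamba layers carry most of the depth.
                x = x + layer(x)
        return x
```

The design intent, per the abstract, is that a small number of attention layers preserve global context while the Mamba majority keeps overall cost near-linear in sequence length.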