Radial Attention: $O(n \log n)$ Sparse Attention with Energy Decay for Long Video Generation

📅 2025-06-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video diffusion models face prohibitive computational overhead in spatiotemporal attention when generating long videos. This work identifies an inherent Spatiotemporal Energy Decay pattern, in which post-softmax attention scores diminish with spatial and temporal distance between tokens, and proposes Radial Attention: a physics-inspired sparse attention mechanism with $O(n \log n)$ complexity that applies a static sparse mask whose spatial attention window shrinks with temporal distance, preserving representational capacity while drastically improving efficiency. The mechanism is compatible with LoRA fine-tuning and can be seamlessly integrated into mainstream video diffusion architectures. Experiments on multiple large-scale models demonstrate up to a 1.9× inference speedup at the default video length and, for videos up to four times longer than the baseline, a 4.4× reduction in training cost and up to 3.7× faster inference, all with minimal tuning.

📝 Abstract
Recent advances in diffusion models have enabled high-quality video generation, but the additional temporal dimension significantly increases computational costs, making training and inference on long videos prohibitively expensive. In this paper, we identify a phenomenon we term Spatiotemporal Energy Decay in video diffusion models: post-softmax attention scores diminish as spatial and temporal distance between tokens increases, akin to the physical decay of a signal or wave over space and time in nature. Motivated by this, we propose Radial Attention, a scalable sparse attention mechanism with $O(n \log n)$ complexity that translates energy decay into exponentially decaying compute density, which is significantly more efficient than standard $O(n^2)$ dense attention and more expressive than linear attention. Specifically, Radial Attention employs a simple, static attention mask where each token attends to spatially nearby tokens, with the attention window size shrinking with temporal distance. Moreover, it allows pre-trained video diffusion models to extend their generation length with efficient LoRA-based fine-tuning. Extensive experiments show that Radial Attention maintains video quality across Wan2.1-14B, HunyuanVideo, and Mochi 1, achieving up to a 1.9$\times$ speedup over the original dense attention. With minimal tuning, it enables video generation up to 4$\times$ longer while reducing training costs by up to 4.4$\times$ compared to direct fine-tuning and accelerating inference by up to 3.7$\times$ compared to dense attention inference.
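The static mask described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the frame-major token layout, the function name `radial_mask`, and the exact schedule (the spatial window halving with each doubling of temporal distance) are assumptions, chosen because such a schedule gives the exponentially decaying compute density and roughly $O(n \log n)$ total work the abstract describes.

```python
import numpy as np

def radial_mask(num_frames: int, tokens_per_frame: int, base_window: int) -> np.ndarray:
    """Boolean attention mask: True where query token i may attend to key token j."""
    n = num_frames * tokens_per_frame
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        f_i, s_i = divmod(i, tokens_per_frame)      # (frame index, spatial position)
        for j in range(n):
            f_j, s_j = divmod(j, tokens_per_frame)
            dt = abs(f_i - f_j)                     # temporal distance in frames
            # Assumed schedule: the window halves with each doubling of
            # temporal distance, i.e. dt in [2^(k-1), 2^k) shrinks the
            # base window by a factor of 2^k (clamped to at least 1).
            band = dt.bit_length()                  # 0 when dt == 0
            window = max(1, base_window >> band)
            mask[i, j] = abs(s_i - s_j) < window
    return mask
```

Summing the mask and dividing by $n^2$ shows the density falling well below that of dense attention as the frame count grows, which is the source of the claimed speedups.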
Problem

Research questions and friction points this paper is trying to address.

Reduces computational costs in long video generation
Addresses spatiotemporal energy decay in video diffusion models
Improves efficiency and scalability of attention mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Radial Attention reduces complexity to O(n log n)
Uses energy decay for sparse attention patterns
Enables efficient LoRA-based fine-tuning for longer videos