VMoBA: Mixture-of-Block Attention for Video Diffusion Models

šŸ“… 2025-06-30
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
Video diffusion models (VDMs) suffer from severe computational bottlenecks in long-sequence, high-resolution video generation because full attention scales quadratically with sequence length. To address this, we propose VMoBA, a dynamic sparse attention mechanism designed specifically for video modeling. Motivated by an analysis of spatio-temporal attention patterns in pre-trained video Transformers, VMoBA extends the Mixture-of-Block Attention (MoBA) framework with three changes: a layer-wise recurrent block partition scheme that cycles through 1D, 2D, and 3D partitions across layers; global block selection that prioritizes the most salient query-key block interactions within each attention head; and threshold-based block selection that adapts the number of attended blocks to their cumulative similarity. Crucially, VMoBA preserves or even surpasses the generation quality of full attention. Experimentally, it achieves a 2.92Ɨ FLOPs reduction and 1.48Ɨ latency speedup during training, and a 2.40Ɨ FLOPs reduction and 1.35Ɨ latency speedup during training-free inference, significantly improving efficiency and scalability for video generation without compromising fidelity.
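To make the threshold-based selection concrete, here is a minimal, hypothetical PyTorch sketch: each key block is represented by its mean-pooled key vector, scored against a query, and blocks are kept until their cumulative softmax probability reaches a cutoff tau. The function name, shapes, pooling choice, and default tau are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of similarity-driven, threshold-based block selection.
# Assumes mean-pooled key blocks as block representatives and a cumulative
# probability cutoff `tau`; all names and shapes are illustrative.
import torch

def select_blocks_by_threshold(q, k_blocks, tau=0.9):
    """q: (d,) single query; k_blocks: (num_blocks, block_len, d)."""
    # Represent each key block by its mean-pooled key.
    block_repr = k_blocks.mean(dim=1)            # (num_blocks, d)
    scores = block_repr @ q                      # (num_blocks,)
    probs = torch.softmax(scores, dim=0)
    # Sort blocks by relevance and keep the smallest prefix whose
    # cumulative probability reaches tau (always keep at least one block).
    sorted_probs, order = probs.sort(descending=True)
    cum = sorted_probs.cumsum(dim=0)
    keep = int((cum < tau).sum().item()) + 1
    return order[:keep]                          # indices of attended blocks

if __name__ == "__main__":
    torch.manual_seed(0)
    q = torch.randn(64)
    k_blocks = torch.randn(16, 32, 64)
    print(select_blocks_by_threshold(q, k_blocks, tau=0.9))
```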

šŸ“ Abstract
The quadratic complexity of full attention mechanisms poses a significant bottleneck for Video Diffusion Models (VDMs) aiming to generate long-duration, high-resolution videos. While various sparse attention methods have been proposed, many are designed as training-free inference accelerators or do not optimally capture the unique spatio-temporal characteristics inherent in video data when trained natively. This paper introduces Video Mixture of Block Attention (VMoBA), a novel sparse attention mechanism specifically adapted for VDMs. Motivated by an in-depth analysis of attention patterns within pre-trained video transformers, which revealed strong spatio-temporal locality, varying query importance, and head-specific concentration levels, VMoBA enhances the original MoBA framework with three key modifications: (1) a layer-wise recurrent block partition scheme (1D-2D-3D) to dynamically adapt to diverse spatio-temporal attention patterns and improve efficiency; (2) global block selection to prioritize the most salient query-key block interactions across an entire attention head; and (3) threshold-based block selection to dynamically determine the number of attended blocks based on their cumulative similarity. Extensive experiments demonstrate that VMoBA significantly accelerates the training of VDMs on longer sequences, achieving 2.92x FLOPs and 1.48x latency speedup, while attaining comparable or even superior generation quality to full attention. Furthermore, VMoBA exhibits competitive performance in training-free inference, offering 2.40x FLOPs and 1.35x latency speedup for high-res video generation.
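The layer-wise recurrent partition in modification (1) can be pictured as cycling the block layout with depth: one layer groups tokens along time (1D), the next into spatial tiles (2D), the next into spatio-temporal cubes (3D). The sketch below, with an assumed token grid, assumed block sizes, and a hypothetical block_ids helper, illustrates one such cycle; it is not the paper's code.

```python
# Minimal sketch of a layer-wise recurrent 1D-2D-3D block partition over a
# (T, H, W) latent token grid. Grid and block sizes are assumptions.
import torch

def block_ids(layer_idx, T, H, W, t_blk=4, h_blk=8, w_blk=8):
    """Return a (T*H*W,) tensor assigning each token to a block id."""
    t = torch.arange(T).view(T, 1, 1).expand(T, H, W)
    h = torch.arange(H).view(1, H, 1).expand(T, H, W)
    w = torch.arange(W).view(1, 1, W).expand(T, H, W)
    n_w = (W + w_blk - 1) // w_blk   # number of block columns
    n_h = (H + h_blk - 1) // h_blk   # number of block rows
    mode = layer_idx % 3
    if mode == 0:        # 1D: temporal blocks
        ids = t // t_blk
    elif mode == 1:      # 2D: spatial tiles shared across frames
        ids = (h // h_blk) * n_w + (w // w_blk)
    else:                # 3D: spatio-temporal cubes
        ids = ((t // t_blk) * n_h + (h // h_blk)) * n_w + (w // w_blk)
    return ids.reshape(-1)

# Example: the block assignment cycles 1D -> 2D -> 3D with depth.
for layer in range(3):
    print(layer, block_ids(layer, T=8, H=16, W=16).unique().numel(), "blocks")
```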
Problem

Research questions and friction points this paper is trying to address.

Quadratic complexity of full attention bottlenecks Video Diffusion Models
Existing sparse attention does not fully capture the spatio-temporal structure of video
Training and inference remain slow for long, high-resolution video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise recurrent block partition scheme
Global block selection for salient interactions across an attention head (sketched after this list)
Threshold-based dynamic block selection
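As referenced above, a minimal sketch of head-wide (global) block selection: every query-block / key-block pair within one head is scored via mean-pooled block representatives, and only the globally highest-scoring pairs are kept. The selection budget num_keep, the pooling choice, and all shapes are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of global (head-wide) block selection, in contrast to
# per-query top-k: score all query-block / key-block pairs, keep the best.
import torch

def global_block_selection(q_blocks, k_blocks, num_keep):
    """q_blocks, k_blocks: (num_blocks, block_len, d). Returns (num_keep, 2) index pairs."""
    q_repr = q_blocks.mean(dim=1)          # (Bq, d) mean-pooled query blocks
    k_repr = k_blocks.mean(dim=1)          # (Bk, d) mean-pooled key blocks
    sim = q_repr @ k_repr.T                # (Bq, Bk) block-to-block similarity
    flat_idx = sim.flatten().topk(num_keep).indices
    rows = flat_idx // sim.shape[1]        # query-block indices
    cols = flat_idx % sim.shape[1]         # key-block indices
    return torch.stack([rows, cols], dim=1)

if __name__ == "__main__":
    torch.manual_seed(0)
    qb = torch.randn(8, 32, 64)
    kb = torch.randn(8, 32, 64)
    print(global_block_selection(qb, kb, num_keep=12))
```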