🤖 AI Summary
To address the quadratic computational complexity and difficulty in modeling long-range spatiotemporal dependencies inherent in Transformer-based text-to-video generation, this paper proposes M4V, an efficient multimodal sequence-modeling framework built upon the Mamba architecture. Methodologically, it introduces: (1) a Multi-Modal Diffusion Mamba (MM-DiM) block that jointly encodes textual and spatiotemporal features; (2) a multimodal token re-composition mechanism that improves cross-modal alignment efficiency; and (3) a reward learning strategy that mitigates visual quality degradation in long-context autoregressive generation. Empirically, M4V synthesizes high-definition video (768×1280) with linear-time sequence modeling, its Mamba blocks reducing FLOPs by 45% compared to attention-based alternatives. It outperforms state-of-the-art methods across multiple text-to-video benchmarks, delivering improvements in both generation fidelity and inference cost.
📝 Abstract
Text-to-video generation has significantly enriched content creation and holds the potential to evolve into powerful world simulators. However, modeling the vast spatiotemporal space remains computationally demanding, particularly when employing Transformers, which incur quadratic complexity in sequence processing and thus limit practical applications. Recent advancements in linear-time sequence modeling, particularly the Mamba architecture, offer a more efficient alternative. Nevertheless, its plain design limits its direct applicability to multi-modal and spatiotemporal video generation tasks. To address these challenges, we introduce M4V, a Multi-Modal Mamba framework for text-to-video generation. Specifically, we propose a multi-modal diffusion Mamba (MM-DiM) block that enables seamless integration of multi-modal information and spatiotemporal modeling through a multi-modal token re-composition design. As a result, the Mamba blocks in M4V reduce FLOPs by 45% compared to the attention-based alternative when generating videos at 768×1280 resolution. Additionally, to mitigate the visual quality degradation in long-context autoregressive generation processes, we introduce a reward learning strategy that further enhances per-frame visual realism. Extensive experiments on text-to-video benchmarks demonstrate M4V's ability to produce high-quality videos while significantly lowering computational costs. Code and models will be publicly available at https://huangjch526.github.io/M4V_project.
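The quadratic-vs-linear argument above can be made concrete with a back-of-envelope sketch. The snippet below is illustrative only and not from the paper: the patch size, frame count, hidden dimension, state size, and constant factors are assumptions chosen to show why attention cost explodes with video token count while a Mamba-style selective scan grows linearly.

```python
# Illustrative comparison (assumptions, not the paper's actual configuration):
# patch size 16, 16 latent frames, hidden dim 1536, SSM state size 16.

def video_token_count(height, width, frames, patch=16):
    """Number of spatiotemporal tokens after patchifying each frame."""
    return (height // patch) * (width // patch) * frames

def attention_flops(seq_len, dim):
    """Rough per-layer self-attention cost: QK^T and AV each ~ 2*L^2*d."""
    return 4 * seq_len**2 * dim

def ssm_flops(seq_len, dim, state=16):
    """Rough per-layer selective-scan (Mamba-style) cost: ~ L*d*state terms."""
    return 6 * seq_len * dim * state

L = video_token_count(768, 1280, frames=16)  # 48 * 80 patches per frame
d = 1536                                     # assumed hidden dimension

print(f"tokens: {L}")
print(f"attention FLOPs/layer:   {attention_flops(L, d):.3e}")
print(f"Mamba-style FLOPs/layer: {ssm_flops(L, d):.3e}")
# Doubling resolution or clip length quadruples the attention term but only
# doubles the scan term, which is why the gap widens at 768x1280.
```

The specific 45% figure reported for M4V depends on the real block design (the MM-DiM block still interleaves other operations); this sketch only captures the asymptotic reason a linear-time sequence model pays off at high resolution.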