MaskFlow: Discrete Flows For Flexible and Efficient Long Video Generation

📅 2025-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key challenges in long-duration, high-fidelity video generation: modeling complex spatiotemporal dynamics, hardware constraints, and generation-length bottlenecks. We propose the first unified framework integrating discrete video representation learning with flow matching. Our core innovation is a frame-level masking conditioning mechanism during training, enabling training-free application to both timestep-dependent and timestep-independent models. The method supports two complementary generation modes, fully autoregressive and full-sequence generation, and achieves efficient, high-fidelity synthesis of videos up to 10× longer than the training sequences. Technical components include discrete tokenization, spatiotemporal conditional modeling, and masked generative (MGM-style) sampling. Evaluated on FaceForensics and DMLab, our approach matches state-of-the-art FVD performance while significantly accelerating sampling, thereby advancing scalability and efficiency in long-video generation.
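The frame-level masking described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: during training, whole frames of the discrete token grid are replaced by a mask token, and the surviving (unmasked) frames serve as conditioning context. All tensor shapes and names here are assumptions.

```python
import torch

def frame_level_mask(tokens: torch.Tensor, mask_ratio: float, mask_id: int):
    """Mask entire frames of a discrete video-token grid.

    tokens: (T, H, W) long tensor of token ids for T frames
    (hypothetical layout). Returns the masked grid and a boolean
    vector marking which frames were masked.
    """
    T = tokens.shape[0]
    masked = tokens.clone()
    # Sample which frames to mask entirely; the remaining
    # unmasked frames act as conditioning for generation.
    drop = torch.rand(T) < mask_ratio
    masked[drop] = mask_id
    return masked, drop
```

At inference, the same mechanism lets previously generated frames stay unmasked while new frames start fully masked, which is what enables extending videos well beyond the training length.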

📝 Abstract
Generating long, high-quality videos remains a challenge due to the complex interplay of spatial and temporal dynamics and hardware limitations. In this work, we introduce MaskFlow, a unified video generation framework that combines discrete representations with flow matching to enable efficient generation of high-quality long videos. By leveraging a frame-level masking strategy during training, MaskFlow conditions on previously generated unmasked frames to generate videos up to ten times the length of its training sequences. MaskFlow does so very efficiently by enabling fast Masked Generative Model (MGM)-style sampling and can be deployed in both fully autoregressive and full-sequence generation modes. We validate the quality of our method on the FaceForensics (FFS) and DeepMind Lab (DMLab) datasets and report Fréchet Video Distance (FVD) competitive with state-of-the-art approaches. We also provide a detailed analysis of the sampling efficiency of our method and demonstrate that MaskFlow can be applied to both timestep-dependent and timestep-independent models in a training-free manner.
Problem

Research questions and friction points this paper is trying to address.

Efficient long video generation
Spatial-temporal dynamics handling
High-quality video synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discrete flow-matching framework
Frame-level masking strategy
Masked Generative Model sampling
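The MGM-style sampling listed above typically unmasks tokens in parallel over a few refinement steps, committing the most confident predictions first. The sketch below follows the MaskGIT-style recipe that MGM sampling is based on; the interface and shapes are assumptions, not the paper's implementation.

```python
import torch

def mgm_sample_step(logits: torch.Tensor, tokens: torch.Tensor,
                    mask_id: int, n_unmask: int) -> torch.Tensor:
    """One MGM-style refinement step over a flattened token grid.

    logits: (N, V) model predictions for N token positions;
    tokens: (N,) current ids, where mask_id marks still-masked slots.
    Commits the n_unmask most confident masked positions.
    """
    probs = logits.softmax(dim=-1)
    conf, pred = probs.max(dim=-1)
    # Only still-masked positions compete for unmasking.
    conf = torch.where(tokens == mask_id, conf,
                       torch.full_like(conf, -1.0))
    keep = conf.topk(n_unmask).indices
    out = tokens.clone()
    out[keep] = pred[keep]
    return out
```

Repeating this step until no mask tokens remain yields a full frame in far fewer network evaluations than token-by-token autoregressive decoding, which is the source of the sampling speedup the summary reports.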