🤖 AI Summary
Long-video generation faces challenges in modeling long-range temporal dependencies: diffusion Transformers are constrained by the quadratic computational complexity of self-attention, which hinders efficient maintenance of minute-scale spatiotemporal consistency. To address this, we propose Mixture of Contexts (MoC), a mechanism that reformulates long video generation as a dynamic information retrieval task. MoC employs a learnable sparse attention routing module that causally routes each query to a small set of salient context chunks plus mandatory anchors, enabling near-linear scalability. Integrated with local window constraints into the diffusion Transformer architecture, MoC significantly reduces memory and computational overhead while preserving long-term consistency of identity, motion, and scene structure. Experiments demonstrate that MoC enables efficient training and high-fidelity synthesis of minute-long videos, establishing a scalable, principled paradigm for long-context generative modeling.
📝 Abstract
Long video generation is fundamentally a long context memory problem: models must retain and retrieve salient events across a long range without collapsing or drifting. However, scaling diffusion transformers to long-context video is fundamentally limited by the quadratic cost of self-attention, whose memory and computation become intractable and difficult to optimize for long sequences. We recast long-context video generation as an internal information retrieval task and propose a simple, learnable sparse attention routing module, Mixture of Contexts (MoC), as an effective long-term memory retrieval engine. In MoC, each query dynamically selects a few informative chunks plus mandatory anchors (caption, local windows) to attend to, with causal routing that prevents loop closures. As we scale the data and gradually sparsify the routing, the model learns to allocate compute to salient history, preserving identities, actions, and scenes over minutes of content. Efficiency follows as a byproduct of retrieval (near-linear scaling), enabling practical training and synthesis, and memory and consistency emerge at the scale of minutes.
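The routing described above (each query scoring pooled chunk descriptors, keeping its top-k past chunks, and always attending to mandatory anchors) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the function name `moc_route`, mean-pooling as the chunk descriptor, and treating chunk 0 as the caption anchor are all illustrative choices.

```python
import numpy as np

def moc_route(q, keys, chunk_size=4, top_k=2):
    """Toy sketch of Mixture-of-Contexts routing (hypothetical helper).

    Each query scores mean-pooled chunk descriptors, keeps its top-k
    causally valid chunks, and always includes two mandatory anchors:
    chunk 0 (standing in for the caption) and its own local chunk.
    Returns a list of sorted chunk-index lists, one per query.
    """
    n_chunks = keys.shape[0] // chunk_size
    # Mean-pool each chunk of key vectors into one descriptor vector.
    desc = keys.reshape(n_chunks, chunk_size, -1).mean(axis=1)
    selections = []
    for t, qt in enumerate(q):
        own = t // chunk_size                # the query's own chunk index
        scores = desc @ qt                   # similarity to every chunk
        scores[own + 1:] = -np.inf           # causal mask: no future chunks
        order = np.argsort(-scores)
        ranked = [int(c) for c in order if np.isfinite(scores[c])][:top_k]
        chosen = set(ranked) | {0, own}      # mandatory anchors: caption + local
        selections.append(sorted(chosen))
    return selections
```

Attention would then be computed only over the keys in each query's selected chunks, which is where the near-linear scaling comes from: the per-query cost depends on `top_k` and `chunk_size` rather than on the full sequence length.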