🤖 AI Summary
Existing video generation methods—particularly diffusion-based approaches—struggle to model long-term narrative structure and cross-shot character consistency, hindering cinematic-quality long-video synthesis. To address this, we propose a hierarchical long-video generation framework: an autoregressive visual token predictor operates at the top level to model global plot progression, while a conditional diffusion model at the bottom level ensures high-fidelity frame rendering. We introduce, for the first time, multimodal script encoding to explicitly enforce cross-scene consistency in character appearance, motion, and stylistic attributes. This architecture decouples narrative reasoning from visual synthesis, enabling minute-scale video generation. Evaluated on diverse cinematic datasets, our method significantly improves generated video length, narrative coherence, and visual fidelity, achieving state-of-the-art performance.
📝 Abstract
Recent advancements in video generation have primarily leveraged diffusion models for short-duration content. However, these approaches often fall short in modeling complex narratives and maintaining character consistency over extended periods, both of which are essential for long-form video production such as movies. We propose MovieDreamer, a novel hierarchical framework that integrates the strengths of autoregressive models with diffusion-based rendering to pioneer long-duration video generation with intricate plot progressions and high visual fidelity. Our approach utilizes autoregressive models for global narrative coherence, predicting sequences of visual tokens that are subsequently transformed into high-quality video frames through diffusion rendering. This method is akin to traditional movie production, where a complex story is decomposed into manageable scene-by-scene shooting. Further, we employ a multimodal script that enriches scene descriptions with detailed character information and visual style, enhancing continuity and character identity across scenes. We present extensive experiments across various movie genres, demonstrating that our approach not only achieves superior visual and narrative quality but also extends the duration of generated content significantly beyond current capabilities. Homepage: https://aim-uofa.github.io/MovieDreamer/.
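The hierarchical decoupling described above can be sketched as follows. This is a minimal illustrative Python mock, not the paper's implementation: the class and function names (`MultimodalScript`, `predict_visual_tokens`, `diffusion_render`) are hypothetical, and the two levels are stubbed out (token prediction as a deterministic hash, rendering as a string) purely to show the data flow — script in, visual tokens at the top level, rendered frames at the bottom level.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class MultimodalScript:
    """Scene description enriched with character identities and visual style
    (hypothetical structure, following the paper's high-level description)."""
    scene_text: str
    character_ids: tuple  # persistent identities shared across scenes
    style: str            # global visual style

def predict_visual_tokens(script: MultimodalScript, n_tokens: int = 4) -> List[int]:
    """Top level (autoregressive in the real system): predict visual tokens
    encoding the plot for one scene. Stubbed with a hash so that identical
    scripts yield identical tokens, mimicking cross-scene consistency."""
    seed = hash(script)
    return [(seed >> (8 * i)) & 0xFF for i in range(n_tokens)]

def diffusion_render(tokens: List[int]) -> str:
    """Bottom level (conditional diffusion in the real system): turn visual
    tokens into frames. Stubbed as a string instead of pixel arrays."""
    return "frames<" + ",".join(map(str, tokens)) + ">"

def generate_long_video(scripts: List[MultimodalScript]) -> List[str]:
    """Hierarchical pipeline: narrative reasoning (token prediction) is
    decoupled from visual synthesis (rendering), scene by scene."""
    return [diffusion_render(predict_visual_tokens(s)) for s in scripts]
```

Within a single run, two scenes that share the same script (same characters, same style) map to the same tokens and hence the same render, illustrating how script conditioning can enforce appearance consistency across scenes.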