🤖 AI Summary
Current long-video generation models suffer from notable limitations in narrative coherence, semantic–visual alignment, and informational richness. Focusing on the culinary domain, this work introduces the first large-scale, high-quality cooking video dataset and proposes a Long Narrative Video Director framework. Methodologically, it (1) establishes a stage-wise generation paradigm tailored to long temporal narratives; (2) designs a keyframe-level visual–semantic embedding alignment mechanism to ensure fine-grained consistency; and (3) applies joint text–image fine-tuning to strengthen cross-modal semantic fusion. Experiments show that the approach substantially outperforms state-of-the-art methods in both visual fidelity and event-logic accuracy, generating coherent, detail-rich, and clearly structured videos spanning tens of seconds. The work establishes a new paradigm for domain-specific long-narrative video generation, advancing both data curation and architecture design for temporally extended multimodal synthesis.
📝 Abstract
Recent video generation models have shown promising results in producing high-quality video clips lasting several seconds. However, these models struggle to generate long sequences that convey clear and informative events, limiting their ability to support coherent narratives. In this paper, we present a large-scale cooking video dataset designed to advance long-form narrative generation in the cooking domain. We validate the quality of the proposed dataset in terms of visual fidelity and textual caption accuracy using state-of-the-art Vision-Language Models (VLMs) and video generation models, respectively. We further introduce a Long Narrative Video Director that enhances both visual and semantic coherence in the generated videos, and we highlight the role of aligning visual embeddings in improving overall video quality. Our method yields substantial improvements in generating visually detailed and semantically aligned keyframes, supported by finetuning techniques that integrate text and image embeddings within the video generation process. Project page: https://videoauteur.github.io/
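The abstract emphasizes aligning visual (keyframe) embeddings with caption semantics during finetuning. As a rough, hypothetical illustration of that idea (not the authors' implementation, and the function names here are invented), one common proxy is a cosine-similarity alignment objective between each keyframe embedding and its caption embedding, where the loss shrinks toward zero as the pairs become better aligned:

```python
import math

def cosine_alignment(keyframe_emb, caption_emb):
    """Cosine similarity in [-1, 1]; higher means better semantic alignment."""
    dot = sum(k * c for k, c in zip(keyframe_emb, caption_emb))
    k_norm = math.sqrt(sum(k * k for k in keyframe_emb))
    c_norm = math.sqrt(sum(c * c for c in caption_emb))
    return dot / (k_norm * c_norm)

def alignment_loss(keyframe_embs, caption_embs):
    """Mean (1 - cosine) over keyframe/caption pairs; 0 = perfectly aligned."""
    scores = [cosine_alignment(k, c) for k, c in zip(keyframe_embs, caption_embs)]
    return 1.0 - sum(scores) / len(scores)

# Toy example: two keyframes with 3-dim embeddings and their caption embeddings.
frames = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
caps = [[1.0, 0.0, 0.0], [0.0, 0.5, 0.0]]
print(round(alignment_loss(frames, caps), 4))  # → 0.0 (each pair is collinear)
```

In practice such a term would be minimized jointly with the generation objective, encouraging generated keyframes to stay semantically faithful to the narrative captions; the actual embedding spaces and loss used by the paper may differ.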