Frame-Level Captions for Long Video Generation with Complex Multi Scenes

📅 2025-05-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of multi-scene narrative coherence and severe error accumulation in autoregressive long-video generation, this paper introduces frame-level textual annotation and frame-level attention mechanisms to enable independent, precise text guidance for each frame. Methodologically: (1) we propose a novel fine-grained frame-level annotation paradigm that supports differentiated textual control; (2) we adopt Diffusion Forcing, a training strategy that assigns each frame its own independent noise level and thereby gives the model flexible temporal modeling; and (3) we build a diffusion-based video generation model upon the WanX2.1-T2V-1.3B architecture. Evaluated on VBench 2.0's "Complex Plots" and "Complex Landscapes" benchmarks, our approach significantly improves instruction adherence and inter-frame coherence, producing high-fidelity, narratively coherent long videos. This work establishes a new paradigm for complex narrative video generation.
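As a rough illustration of the Diffusion Forcing idea referenced above (training with an independent noise level per frame rather than one shared level), here is a minimal sketch; the function name and signature are assumptions for exposition, not the paper's code:

```python
import random

def sample_noise_levels(num_frames, num_steps, shared=False, rng=random):
    """Draw a diffusion timestep for each frame of a video clip.

    shared=True  -> standard diffusion training: every frame is corrupted
                    to the same timestep.
    shared=False -> Diffusion Forcing-style training (assumed reading):
                    each frame draws its own independent timestep, so the
                    model learns to denoise frames at mixed corruption
                    levels, which is what enables flexible rollout.
    """
    if shared:
        t = rng.randrange(num_steps)
        return [t] * num_frames
    return [rng.randrange(num_steps) for _ in range(num_frames)]
```

For example, `sample_noise_levels(5, 1000)` returns five timesteps that are generally different from one another, whereas `shared=True` returns five copies of one timestep.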

📝 Abstract
Generating long videos that can show complex stories, like movie scenes from scripts, has great promise and offers much more than short clips. However, current methods that use autoregression with diffusion models often struggle because their step-by-step process naturally leads to serious error accumulation (drift). Also, many existing ways to make long videos focus on single, continuous scenes, making them less useful for stories with many events and changes. This paper introduces a new approach to solve these problems. First, we propose a novel way to annotate datasets at the frame level, providing the detailed text guidance needed for making complex, multi-scene long videos. This detailed guidance works with a Frame-Level Attention Mechanism to make sure text and video match precisely. A key feature is that each part (frame) within these windows can be guided by its own distinct text prompt. Our training uses Diffusion Forcing to provide the model with the ability to handle time flexibly. We tested our approach on difficult VBench 2.0 benchmarks ("Complex Plots" and "Complex Landscapes") based on the WanX2.1-T2V-1.3B model. The results show our method is better at following instructions in complex, changing scenes and creates high-quality long videos. We plan to share our dataset annotation methods and trained models with the research community. Project page: https://zgctroy.github.io/frame-level-captions.
Problem

Research questions and friction points this paper is trying to address.

Error accumulation in autoregressive diffusion models for long videos
Limited focus on single scenes in existing long video methods
Lack of detailed text guidance for multi-scene video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frame-level annotations for detailed text guidance
Frame-Level Attention Mechanism for precise alignment
Diffusion Forcing for flexible temporal modeling
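The frame-level attention idea listed above (each frame attends only to its own caption's text tokens) can be sketched as a block-diagonal attention mask; the helper name and token-count interface below are illustrative assumptions, not the paper's implementation:

```python
def frame_text_mask(frames_tokens, captions_tokens):
    """Build a boolean cross-attention mask between video and text tokens.

    frames_tokens[i]   -- number of video tokens belonging to frame i
    captions_tokens[i] -- number of text tokens in frame i's caption
    mask[q][k] == True means video token q may attend to text token k;
    tokens of frame i see only the tokens of caption i (block-diagonal),
    which is one plausible reading of "frame-level attention".
    """
    total_q = sum(frames_tokens)
    total_k = sum(captions_tokens)
    mask = [[False] * total_k for _ in range(total_q)]
    q0 = k0 = 0
    for fq, fk in zip(frames_tokens, captions_tokens):
        for q in range(q0, q0 + fq):
            for k in range(k0, k0 + fk):
                mask[q][k] = True
        q0 += fq
        k0 += fk
    return mask
```

With two frames of 2 and 1 video tokens and captions of 3 and 2 text tokens, `frame_text_mask([2, 1], [3, 2])` lets the first frame's tokens see only text positions 0-2 and the second frame's token see only positions 3-4.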