NarrLV: Towards a Comprehensive Narrative-Centric Evaluation for Long Video Generation Models

📅 2025-07-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing long-video generation models lack dedicated evaluation benchmarks tailored to narrative expressiveness; mainstream benchmarks (e.g., VBench) support only simple narrative prompts and fail to capture complex narrative structures. Method: We propose NarrLV—the first comprehensive benchmark for evaluating narrative quality in long-video generation. NarrLV introduces Temporal Narrative Atoms (TNAs) as fundamental narrative units and builds a scalable automatic prompt generation pipeline. It establishes a three-tier evaluation framework covering narrative structure, coherence, and detail fidelity, integrating cinematic narrative theory with multimodal large language models (MLLMs) to enable automated, question-answering–based, multi-granularity assessment. Results: Experiments show high correlation between NarrLV scores and human judgments (Spearman’s ρ > 0.85). NarrLV effectively exposes critical bottlenecks in current models—particularly in long-range causal reasoning and character consistency—providing a reproducible, interpretable, and theoretically grounded evaluation standard for long-video generation.
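The QA-based, multi-granularity scoring described above can be illustrated with a minimal sketch. This is not the paper's implementation: `answer_fn` is a hypothetical stand-in for the MLLM answering narrative questions about a generated video, and `spearman_rho` is a hand-rolled rank correlation used only to show how a metric's agreement with human ratings (the reported ρ > 0.85) would be checked.

```python
from statistics import mean

def _ranks(values):
    # Average ranks (1-based), handling ties, as rank correlation requires.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation: Pearson correlation computed on ranks."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

def qa_score(questions, answer_fn):
    """Score a video as the fraction of narrative questions answered 'yes'.

    `answer_fn` stands in for an MLLM judging the generated video against
    questions derived from the evaluation prompt (an assumption, not the
    paper's exact protocol).
    """
    answers = [answer_fn(q) for q in questions]
    return sum(a == "yes" for a in answers) / len(answers)
```

Per-model QA scores would then be correlated against human ratings with `spearman_rho` to validate the metric.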

📝 Abstract
With the rapid development of foundation video generation technologies, long video generation models have exhibited promising research potential thanks to expanded content creation space. Recent studies reveal that the goal of long video generation tasks is not only to extend video duration but also to accurately express richer narrative content within longer videos. However, due to the lack of evaluation benchmarks specifically designed for long video generation models, the current assessment of these models primarily relies on benchmarks with simple narrative prompts (e.g., VBench). To the best of our knowledge, our proposed NarrLV is the first benchmark to comprehensively evaluate the Narrative expression capabilities of Long Video generation models. Inspired by film narrative theory, (i) we first introduce the basic narrative unit maintaining continuous visual presentation in videos as Temporal Narrative Atom (TNA), and use its count to quantitatively measure narrative richness. Guided by three key film narrative elements influencing TNA changes, we construct an automatic prompt generation pipeline capable of producing evaluation prompts with a flexibly expandable number of TNAs. (ii) Then, based on the three progressive levels of narrative content expression, we design an effective evaluation metric using the MLLM-based question generation and answering framework. (iii) Finally, we conduct extensive evaluations on existing long video generation models and the foundation generation models. Experimental results demonstrate that our metric aligns closely with human judgments. The derived evaluation outcomes reveal the detailed capability boundaries of current video generation models in narrative content expression.
Problem

Research questions and friction points this paper is trying to address.

Lack of narrative-centric benchmarks for long video evaluation
Need to measure narrative richness in generated long videos
Existing assessments cannot delineate models' capability boundaries in narrative content expression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Temporal Narrative Atoms (TNAs) to quantify narrative richness
Designs automatic prompt generation pipeline for TNAs
Develops MLLM-based metric for narrative evaluation
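The TNA idea behind the first two bullets can be sketched as follows. The function name and prompt format are illustrative assumptions, not the paper's pipeline; the point is that each chained event acts as one Temporal Narrative Atom, so the TNA count grows with the prompt and quantifies narrative richness.

```python
def expand_prompt(base_scene, tna_events):
    """Chain TNA clauses onto a base scene description.

    Each event plays the role of one Temporal Narrative Atom: a unit of
    continuous visual presentation. The returned count is the prompt's
    narrative richness, which the pipeline can scale by appending events.
    (Hypothetical sketch; not the paper's actual prompt generator.)
    """
    body = ", then ".join(tna_events)
    return f"{base_scene}: {body}", len(tna_events)

# Example: a three-TNA evaluation prompt.
prompt, richness = expand_prompt(
    "A cat wakes up in a garden",
    ["it chases a butterfly", "it climbs a tree", "it naps in the sun"],
)
```

Adding a fourth event to the list would yield a richness of 4, giving the flexibly expandable TNA count the summary describes.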
Authors

Xiaokun Feng
Institute of Automation, Chinese Academy of Sciences
Computer vision; deep learning
Haiming Yu
AMAP, Alibaba Group
Meiqi Wu
University of Chinese Academy of Sciences
Computer vision
Shiyu Hu
Research Fellow, Nanyang Technological University (NTU)
Computer vision; data-centric AI; AI for science
Jintao Chen
AMAP, Alibaba Group; PKU
Chen Zhu
AMAP, Alibaba Group
Jiahong Wu
AMAP, Alibaba Group
Xiangxiang Chu
AMAP, Alibaba Group
Kaiqi Huang
School of Artificial Intelligence, UCAS; CASIA