LVBench: An Extreme Long Video Understanding Benchmark

📅 2024-06-12
🏛️ arXiv.org
📈 Citations: 105 · Influential: 13
🤖 AI Summary
Existing short-video understanding benchmarks inadequately support real-world applications requiring hour-scale video comprehension—such as embodied intelligence, in-depth film criticism, and live sports commentary. To address this gap, we introduce LVBench, the first benchmark dedicated to *extreme-long-video understanding* (i.e., videos spanning several hours). LVBench systematically defines an evaluation paradigm encompassing diverse publicly available multi-source videos and multiple tasks—including temporal reasoning, event provenance tracing, and fine-grained question answering—with explicit emphasis on modeling cross-hour semantic coherence. Leveraging structured annotations and a hierarchical evaluation protocol (span-level, global-level, and temporal-dependency-level), it enables end-to-end assessment of multimodal large models. Empirical results show that state-of-the-art models underperform human annotators by 32.7% on average, confirming the benchmark’s substantial difficulty. The full codebase and dataset are publicly released.
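To make the hierarchical protocol above concrete, here is a minimal scoring sketch in Python. It is a hypothetical illustration, not LVBench's released evaluation code: the JSONL layout and the field names `level`, `answer`, and `prediction` are assumptions; only the three level names come from the summary above.

```python
import json
from collections import defaultdict

def score_by_level(path: str) -> dict[str, float]:
    """Aggregate multiple-choice accuracy per evaluation level.

    Assumed input: one JSON object per line, e.g.
    {"level": "span", "answer": "B", "prediction": "C"}
    """
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            level = record["level"]  # "span" | "global" | "temporal-dependency"
            total[level] += 1
            correct[level] += record["prediction"] == record["answer"]
    return {level: correct[level] / total[level] for level in total}

if __name__ == "__main__":
    for level, acc in sorted(score_by_level("predictions.jsonl").items()):
        print(f"{level:>20}  {acc:.1%}")
```

Reporting accuracy per level, rather than a single pooled number, makes it visible whether a model fails on local spans or on cross-hour temporal dependencies.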

📝 Abstract
Recent progress in multimodal large language models has markedly enhanced the understanding of short videos (typically under one minute), and several evaluation datasets have emerged accordingly. However, these advancements fall short of meeting the demands of real-world applications such as embodied intelligence for long-term decision-making, in-depth movie reviews and discussions, and live sports commentary, all of which require comprehension of long videos spanning several hours. To address this gap, we introduce LVBench, a benchmark specifically designed for long video understanding. Our dataset comprises publicly sourced videos and encompasses a diverse set of tasks aimed at long video comprehension and information extraction. LVBench is designed to challenge multimodal models to demonstrate long-term memory and extended comprehension capabilities. Our extensive evaluations reveal that current multimodal models still underperform on these demanding long video understanding tasks. Through LVBench, we aim to spur the development of more advanced models capable of tackling the complexities of long video comprehension. Our data and code are publicly available at: https://lvbench.github.io.
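The abstract leaves open how a model should ingest a video spanning several hours; a common baseline strategy is to subsample a fixed frame budget before querying the multimodal model. The OpenCV sketch below illustrates that pattern; `sample_frames` and the 32-frame budget are illustrative assumptions, not part of LVBench.

```python
import cv2  # pip install opencv-python

def sample_frames(video_path: str, num_frames: int = 32):
    """Uniformly sample a fixed frame budget from an arbitrarily long video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // num_frames, 1)
    frames = []
    for i in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
        if len(frames) == num_frames:
            break
    cap.release()
    return frames
```

The sampled frames, together with the question text, would then be passed to whichever multimodal model is under evaluation; a fixed budget spread over several hours is precisely what makes the long-term memory demands of this benchmark hard.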
Problem

Research questions and friction points this paper is trying to address.

Addressing long video understanding beyond short clips
Evaluating multimodal models on extended comprehension tasks
Challenging models with hours-long video memory demands
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces LVBench benchmark for long videos
Comprises diverse tasks for extended comprehension
Challenges models with long-term memory capabilities
👥 Authors
Weihan Wang (Tsinghua University)
Zehai He (Tsinghua University)
Wenyi Hong (Tsinghua University)
Yean Cheng (Peking University)
Xiaohan Zhang (Zhipu AI)
Ji Qi (Tsinghua University)
Shiyu Huang (XPENG; Tsinghua University)
Bin Xu (Tsinghua University)
Yuxiao Dong (Tsinghua University)
Ming Ding (Zhipu AI)
Jie Tang (Tsinghua University)