MLVU: Benchmarking Multi-task Long Video Understanding

📅 2024-06-06
📈 Citations: 39
Influential: 7
🤖 AI Summary
Existing video understanding benchmarks suffer from short video durations, limited domain diversity, and narrow task coverage, hindering comprehensive evaluation of multimodal large language models’ (MLLMs) long-video, multi-task reasoning capabilities. To address this, we introduce MLVU—the first holistic benchmark for multi-task long-video understanding—encompassing diverse video genres (e.g., films, surveillance footage, first-person videos) and supporting systematic evaluation across durations ranging from minutes to hours and 12 fine-grained tasks, including temporal localization, causal reasoning, and cross-modal question answering. MLVU establishes the first standardized assessment of MLLMs’ long-range temporal modeling, cross-modal alignment, and task generalization. Extensive experiments on 23 state-of-the-art MLLMs reveal a pronounced performance degradation with increasing video length, identifying context window capacity, visual encoder capability, and LLM backbone selection as fundamental bottlenecks. MLVU provides a reproducible benchmark and clear, actionable directions for advancement.

📝 Abstract
The evaluation of Long Video Understanding (LVU) performance poses an important but challenging research problem. Despite previous efforts, existing video understanding benchmarks are severely constrained by several issues, especially the insufficient length of their videos, a lack of diversity in video types and evaluation tasks, and their unsuitability for evaluating LVU performance. To address these problems, we propose a new benchmark called MLVU (Multi-task Long Video Understanding Benchmark) for the comprehensive and in-depth evaluation of LVU. MLVU offers the following critical values: 1) A substantial and flexible extension of video lengths, which enables the benchmark to evaluate LVU performance across a wide range of durations. 2) The inclusion of various video genres, e.g., movies, surveillance footage, egocentric videos, cartoons, and game videos, which reflects the models' LVU performance in different scenarios. 3) The development of diversified evaluation tasks, which enables a comprehensive examination of MLLMs' key abilities in long-video understanding. An empirical study with 23 of the latest MLLMs reveals significant room for improvement in today's techniques, as all existing methods struggle with most of the evaluation tasks and exhibit severe performance degradation when handling longer videos. Additionally, it suggests that factors such as context length, image-understanding ability, and the choice of LLM backbone can play critical roles in future advancements. We anticipate that MLVU will advance the research of long video understanding by providing a comprehensive and in-depth analysis of MLLMs.
Problem

Research questions and friction points this paper is trying to address.

Video Understanding
Long Video Analysis
Multi-task Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

MLVU Benchmark
Long Video Understanding
Diverse Video Types
👥 Authors

Junjie Zhou
Nanjing University
Computer Vision, Machine Learning

Yan Shu
University of Trento (previously Harbin Institute of Technology)
Vision and Language, Multi-modal Learning, Video Understanding, OCR, Remote Sensing

Bo Zhao
Beijing Academy of Artificial Intelligence

Boya Wu
Beijing Academy of Artificial Intelligence

Shitao Xiao
BUPT

Xi Yang
Beijing Academy of Artificial Intelligence

Yongping Xiong
Beijing University of Posts and Telecommunications

Bo Zhang
College of Computer Science and Technology, Zhejiang University

Tiejun Huang
Professor, School of Computer Science, Peking University
Visual Information Processing

Zheng Liu
Beijing Academy of Artificial Intelligence