🤖 AI Summary
Existing video understanding benchmarks suffer from short video durations, limited domain diversity, and narrow task coverage, hindering comprehensive evaluation of multimodal large language models' (MLLMs) long-video, multi-task reasoning capabilities. To address this, we introduce MLVU, the first holistic benchmark for multi-task long-video understanding. It encompasses diverse video genres (e.g., movies, surveillance footage, egocentric videos) and supports systematic evaluation across durations ranging from minutes to hours and across nine fine-grained tasks, including temporal localization, causal reasoning, and cross-modal question answering. MLVU thereby enables a standardized assessment of MLLMs' long-range temporal modeling, cross-modal alignment, and task generalization. Extensive experiments on 23 state-of-the-art MLLMs reveal pronounced performance degradation as video length increases, and identify context length, image-understanding ability, and the choice of LLM backbone as fundamental bottlenecks. MLVU provides a reproducible benchmark and clear, actionable directions for future advancement.
📝 Abstract
The evaluation of Long Video Understanding (LVU) performance poses an important but challenging research problem. Despite previous efforts, existing video understanding benchmarks are severely constrained by several issues, especially insufficient video lengths, a lack of diversity in video types and evaluation tasks, and their unsuitability for evaluating LVU performance. To address these problems, we propose a new benchmark called MLVU (Multi-task Long Video Understanding Benchmark) for the comprehensive and in-depth evaluation of LVU. MLVU offers the following critical advantages: 1) a substantial and flexible extension of video lengths, which enables the benchmark to evaluate LVU performance across a wide range of durations; 2) the inclusion of various video genres, e.g., movies, surveillance footage, egocentric videos, cartoons, and game videos, which reflects the models' LVU performance in different scenarios; 3) the development of diversified evaluation tasks, which enables a comprehensive examination of MLLMs' key abilities in long-video understanding. An empirical study with 23 of the latest MLLMs reveals significant room for improvement in today's techniques: all existing methods struggle with most of the evaluation tasks and exhibit severe performance degradation when handling longer videos. The study further suggests that factors such as context length, image-understanding ability, and the choice of LLM backbone can play critical roles in future advancements. We anticipate that MLVU will advance the research of long video understanding by providing a comprehensive and in-depth analysis of MLLMs.
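As a rough illustration of how a multi-task, duration-stratified benchmark like this is typically consumed, the sketch below shows a minimal evaluation loop that scores a model's multiple-choice answers per task and per duration bucket, which is how length-dependent degradation would surface in the results. All names here (`bucket_of`, `evaluate`, `answer_fn`, the bucket boundaries, the demo task labels) are hypothetical stand-ins for exposition, not MLVU's actual data loader or API.

```python
from collections import defaultdict

# Hypothetical duration buckets in seconds; MLVU's real splits may differ.
BUCKETS = [(0, 600, "short"), (600, 1800, "medium"), (1800, float("inf"), "long")]

def bucket_of(duration_s):
    """Map a video duration (seconds) to its bucket label."""
    for lo, hi, name in BUCKETS:
        if lo <= duration_s < hi:
            return name

def evaluate(samples, answer_fn):
    """Score multiple-choice samples per (task, duration bucket).

    `samples` is an iterable of dicts with keys: 'task', 'duration',
    'video', 'question', 'options', 'answer' (gold option index).
    `answer_fn(video, question, options)` returns the model's chosen
    option index. Both interfaces are assumptions for this sketch.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for s in samples:
        key = (s["task"], bucket_of(s["duration"]))
        pred = answer_fn(s["video"], s["question"], s["options"])
        correct[key] += int(pred == s["answer"])
        total[key] += 1
    # Per-cell accuracy; comparing cells across buckets exposes
    # degradation with video length for each task.
    return {k: correct[k] / total[k] for k in total}

if __name__ == "__main__":
    # Tiny demo with a trivial baseline that always picks option 0.
    demo = [
        {"task": "plot_qa", "duration": 420, "video": "v1.mp4",
         "question": "Who ...?", "options": ["A", "B", "C", "D"], "answer": 0},
        {"task": "action_order", "duration": 2400, "video": "v2.mp4",
         "question": "Which ...?", "options": ["A", "B", "C", "D"], "answer": 2},
    ]
    acc = evaluate(demo, lambda video, question, options: 0)
    for (task, bucket), a in sorted(acc.items()):
        print(f"{task:12s} {bucket:6s} acc={a:.2f}")
```

Reporting accuracy per (task, bucket) cell rather than as a single aggregate is what lets a benchmark of this kind separate genuine long-range understanding from performance carried by short clips.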