🤖 AI Summary
Existing research lacks a systematic evaluation of the video understanding capabilities of multimodal large language models (MLLMs). This paper introduces Video-MME, the first comprehensive, full-spectrum benchmark for video analysis. It spans six primary visual domains with 30 fine-grained subfields, covers videos from 11 seconds to one hour, integrates frame, subtitle, and audio modalities, and provides 2,700 expert-annotated question-answer pairs over 900 videos (254 hours in total). Video-MME distinguishes itself through four key features: (i) diversity in video types, (ii) broad temporal coverage, (iii) breadth of input modalities, and (iv) high-quality manual annotation. We evaluate leading MLLMs, including the GPT-4 series, Gemini 1.5 Pro, and open-source models such as InternVL-Chat-V1.5 and LLaVA-NeXT-Video. Results show that Gemini 1.5 Pro is the best-performing commercial model, significantly outperforming open-source alternatives, while the benchmark exposes persistent weaknesses of current MLLMs in long-sequence temporal modeling and multi-modal reasoning.
📝 Abstract
In the quest for artificial general intelligence, Multi-modal Large Language Models (MLLMs) have emerged as a focal point of recent advancements. However, the predominant focus remains on developing their capabilities in static image understanding. The potential of MLLMs in processing sequential visual data is still insufficiently explored, highlighting the absence of a comprehensive, high-quality assessment of their performance. In this paper, we introduce Video-MME, the first-ever full-spectrum, Multi-Modal Evaluation benchmark of MLLMs in Video analysis. Our work is distinguished from existing benchmarks by four key features: 1) Diversity in video types, spanning 6 primary visual domains with 30 subfields to ensure broad scenario generalizability; 2) Duration in the temporal dimension, encompassing short-, medium-, and long-term videos, ranging from 11 seconds to 1 hour, for robust contextual dynamics; 3) Breadth in data modalities, integrating multi-modal inputs beyond video frames, including subtitles and audio, to unveil the all-round capabilities of MLLMs; 4) Quality in annotations, employing rigorous manual labeling by expert annotators to enable precise and reliable model assessment. In total, 900 videos spanning 254 hours are manually selected and annotated by repeatedly viewing all the video content, resulting in 2,700 question-answer pairs. With Video-MME, we extensively evaluate various state-of-the-art MLLMs, including the GPT-4 series and Gemini 1.5 Pro, as well as open-source image models such as InternVL-Chat-V1.5 and video models such as LLaVA-NeXT-Video. Our experiments reveal that Gemini 1.5 Pro is the best-performing commercial model, significantly outperforming the open-source models. Our dataset, together with these findings, underscores the need for further improvements in handling longer sequences and multi-modal data. Project Page: https://video-mme.github.io
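
To make the evaluation protocol concrete, below is a minimal scoring sketch, assuming the question-answer pairs are multiple-choice with a single correct option letter and that each annotation carries a short/medium/long duration tag matching the benchmark's temporal split. The JSON field names (`video_id`, `duration`, `question`, `options`, `answer`) and the `query_model` stub are illustrative assumptions, not the official Video-MME data format or tooling.

```python
import json
from collections import defaultdict


def query_model(video_path: str, question: str, options: list[str]) -> str:
    """Hypothetical stub: send sampled frames (and optionally subtitles or
    audio) plus the question to an MLLM and return one option letter, e.g.
    "A". Replace with a real API call (e.g., GPT-4o or Gemini 1.5 Pro)."""
    raise NotImplementedError


def evaluate(annotation_file: str) -> dict[str, float]:
    # Each record is assumed to hold: video_id, a duration bucket
    # ("short" / "medium" / "long"), question, options, and the gold letter.
    with open(annotation_file) as f:
        records = json.load(f)

    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for r in records:
        pred = query_model(f"videos/{r['video_id']}.mp4",
                           r["question"], r["options"])
        total[r["duration"]] += 1
        if pred.strip().upper().startswith(r["answer"]):
            correct[r["duration"]] += 1

    # Accuracy per duration bucket, plus an overall score over all pairs.
    scores = {d: correct[d] / total[d] for d in total}
    scores["overall"] = sum(correct.values()) / sum(total.values())
    return scores
```

The same loop extends naturally to the subtitle and audio settings reported in the paper: only the inputs passed to the model call change, so per-modality scores stay directly comparable across the short, medium, and long buckets.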