🤖 AI Summary
Existing evaluation of video multimodal large language models (MLLMs) faces critical bottlenecks: high data-construction costs, difficulty in disentangling fine-grained capabilities, and coarse assessment granularity. To address these, we propose VideoNIAH, a synthetic evaluation framework that decouples *video content* from *query-response generation*. By injecting unrelated visual "needles" into synthetic videos, VideoNIAH automatically generates diverse, multi-length, skill-specific question-answer pairs, enabling low-cost, scalable, and fine-grained evaluation. It is the first framework to systematically assess three core temporal-spatial competencies: temporal perception, chronological ordering, and spatio-temporal coherence, supported by a multi-task evaluation protocol covering retrieval, ordering, and counting. Leveraging VideoNIAH, we construct VNBench, an open-source benchmark that exposes substantial performance disparities among state-of-the-art video MLLMs across these dimensions. Our analysis yields actionable training optimization insights, advancing standardization in video understanding evaluation.
📝 Abstract
Video understanding is a crucial next step for multimodal large language models (MLLMs), and various benchmarks have been introduced to evaluate them. Nevertheless, current video benchmarks remain inefficient for evaluating models during iterative development due to the high cost of constructing datasets and the difficulty of isolating specific skills. In this paper, we propose VideoNIAH (Video Needle In A Haystack), a benchmark construction framework based on synthetic video generation. VideoNIAH decouples video content from its query-responses by inserting unrelated visual 'needles' into original videos. The framework automates the generation of query-response pairs using predefined rules, minimizing manual labor. The queries focus on specific aspects of video understanding, enabling more skill-specific evaluations. The separation between video content and the queries also allows for increased video variety and evaluation across different video lengths. Utilizing VideoNIAH, we compile a video benchmark, VNBench, which includes retrieval, ordering, and counting tasks to evaluate three key aspects of video understanding: temporal perception, chronological ordering, and spatio-temporal coherence. We conduct a comprehensive evaluation of both proprietary and open-source models, uncovering significant differences in their video understanding capabilities across tasks. Additionally, we perform an in-depth analysis of the test results and model configurations. Based on these findings, we provide advice for improving video MLLM training, offering insights to guide future research and model development. The code and data are available at https://github.com/joez17/VideoNIAH.
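The key idea in the abstract, that query-response pairs can be derived automatically from the metadata of the inserted "needles" rather than from the video content itself, can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual pipeline; the `Needle` dataclass, `make_qa_pairs` function, and the question templates are hypothetical stand-ins for the paper's predefined rules.

```python
# Hypothetical sketch of VideoNIAH-style rule-based QA generation.
# Because we control where and what needles are inserted, ground-truth
# answers for retrieval, ordering, and counting follow mechanically
# from the insertion metadata, with no manual annotation.
from dataclasses import dataclass

@dataclass
class Needle:
    label: str        # what the inserted visual needle depicts
    timestamp: float  # insertion time in the video, in seconds

def make_qa_pairs(needles):
    """Generate (question, answer) pairs from needle insertion metadata."""
    ordered = sorted(needles, key=lambda n: n.timestamp)
    qa = []
    # Retrieval task (temporal perception): locate a single needle.
    qa.append((f"Which inserted object appears at {ordered[0].timestamp:.0f}s?",
               ordered[0].label))
    # Ordering task (chronological ordering): recover the insertion order.
    qa.append(("In what order do the inserted objects appear?",
               [n.label for n in ordered]))
    # Counting task (spatio-temporal coherence): count distinct needles.
    qa.append(("How many objects were inserted into the video?",
               len(needles)))
    return qa

pairs = make_qa_pairs([Needle("red cube", 42.0), Needle("blue ball", 7.0)])
```

Because the needles are unrelated to the source video, the same rule set can be applied to videos of any length or content, which is what enables the multi-length, skill-specific evaluation the abstract describes.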