Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video MLLMs

📅 2024-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video multimodal large language models (MLLMs) face critical evaluation bottlenecks: high data construction costs, difficulty in isolating fine-grained capabilities, and coarse-grained assessment. To address these, we propose VideoNIAH, a synthetic evaluation framework that decouples *video content* from *query-response generation*. By inserting unrelated visual "needles" into videos, VideoNIAH automatically generates diverse, multi-length, skill-specific question-answer pairs, enabling low-cost, scalable, and fine-grained evaluation. The framework systematically assesses three core competencies: temporal perception, chronological ordering, and spatio-temporal coherence, through a multi-task protocol covering retrieval, ordering, and counting. Leveraging VideoNIAH, we construct VNBench, an open-source benchmark that exposes substantial performance disparities among state-of-the-art video MLLMs across these dimensions. Our analysis yields actionable training insights, advancing standardization in video understanding evaluation.

📝 Abstract
Video understanding is a crucial next step for multimodal large language models (MLLMs), and various benchmarks have been introduced to better evaluate them. Nevertheless, current video benchmarks remain inefficient for evaluating video models during iterative development due to the high cost of constructing datasets and the difficulty of isolating specific skills. In this paper, we propose VideoNIAH (Video Needle In A Haystack), a benchmark construction framework based on synthetic video generation. VideoNIAH decouples video content from its query-response pairs by inserting unrelated visual 'needles' into original videos. The framework automates the generation of query-response pairs using predefined rules, minimizing manual labor. The queries focus on specific aspects of video understanding, enabling more skill-specific evaluations. The separation between video content and the queries also allows for greater video variety and evaluation across different lengths. Using VideoNIAH, we compile a video benchmark, VNBench, which includes retrieval, ordering, and counting tasks to evaluate three key aspects of video understanding: temporal perception, chronological ordering, and spatio-temporal coherence. We conduct a comprehensive evaluation of both proprietary and open-source models, uncovering significant differences in their video understanding capabilities across tasks. Additionally, we perform an in-depth analysis of the test results and model configurations. Based on these findings, we offer advice for improving video MLLM training, providing valuable insights to guide future research and model development. The code and data are available at https://github.com/joez17/VideoNIAH.
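The construction protocol the abstract describes (insert known "needles" into a haystack video, then generate query-response pairs by rule from the ground-truth insertions) can be sketched as follows. This is a minimal illustration with hypothetical function and field names, not the authors' released code; frames are represented abstractly, and the three task types mirror VNBench's retrieval, ordering, and counting.

```python
"""Sketch of a VideoNIAH-style pipeline: insert unrelated 'needle' frames
into a haystack video, record ground truth, and derive QA pairs by rule."""
import random


def insert_needles(haystack_frames, needle_frames, num_needles=3, seed=0):
    """Insert needles at random positions; return the new frame list plus
    ground-truth (final index, needle) records for rule-based QA generation."""
    rng = random.Random(seed)
    positions = sorted(rng.sample(range(len(haystack_frames) + 1), num_needles))
    needles = rng.sample(needle_frames, num_needles)
    frames = list(haystack_frames)
    records = []
    for offset, (pos, needle) in enumerate(zip(positions, needles)):
        # Earlier insertions shift later positions right by one each.
        frames.insert(pos + offset, needle)
        records.append({"index": pos + offset, "needle": needle})
    return frames, records


def generate_queries(records):
    """Rule-based query-response pairs for the three VNBench task types."""
    order = [r["needle"] for r in records]  # records are in temporal order
    return [
        {"task": "retrieval",
         "query": "Which needle appears first?", "answer": order[0]},
        {"task": "ordering",
         "query": "List the needles in order of appearance.", "answer": order},
        {"task": "counting",
         "query": "How many needles were inserted?", "answer": len(records)},
    ]
```

Because the needles and their positions are known at construction time, answers are exact by design and no manual annotation is needed; varying the haystack length and needle count yields multi-length, skill-specific probes at negligible cost.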
Problem

Research questions and friction points this paper is trying to address.

High cost of constructing video benchmark datasets during iterative model development.
Difficulty isolating specific skills with existing coarse-grained video benchmarks.
Lack of a scalable, skill-specific evaluator for video MLLMs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic video generation for benchmark creation
Automated query-response pair generation
Skill-specific video understanding evaluation
👥 Authors
Zijia Zhao — Institute of Automation, Chinese Academy of Sciences (CASIA) — Multimodal learning
Haoyu Lu — Gaoling School of Artificial Intelligence, Renmin University of China
Yuqi Huo — Bytedance Inc. — Multi-modal pretraining
Yifan Du — Renmin University of China — Vision-Language Models, MLLMs
Tongtian Yue — Institute of Automation, Chinese Academy of Sciences — Multimodal pretraining, Vision-Language
Longteng Guo — Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Bingning Wang — Baichuan Inc. — NLP, Question Answering, Large Language Models
Weipeng Chen — Baichuan Inc.
Jing Liu — Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences