MVPBench: A Multi-Video Perception Evaluation Benchmark for Multi-Modal Video Understanding

πŸ“… 2026-03-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing video understanding benchmarks are largely confined to single videos or static images, making them inadequate for evaluating models’ capacity to comprehend complex cross-temporal and cross-view interactions across multiple videos. To address this gap, this work introduces the first comprehensive benchmark for multimodal multi-video perception, encompassing 14 subtasks and 5,000 structured question-answer pairs. The benchmark integrates both existing datasets and newly annotated video content, covering a diverse range of visual scenarios. Experimental results demonstrate that current state-of-the-art multimodal large language models exhibit significant performance degradation when processing multi-video inputs, revealing critical limitations in their ability to perform coordinated understanding across multiple video streams. These findings underscore the necessity and challenge of the proposed benchmark in advancing multi-video reasoning capabilities.

πŸ“ Abstract
The rapid progress of Large Language Models (LLMs) has spurred growing interest in Multi-modal LLMs (MLLMs) and motivated the development of benchmarks to evaluate their perceptual and comprehension abilities. Existing benchmarks, however, are limited to static images or single videos, overlooking the complex interactions across multiple videos. To address this gap, we introduce the Multi-Video Perception Evaluation Benchmark (MVPBench), a new benchmark featuring 14 subtasks across diverse visual domains designed to evaluate models on extracting relevant information from video sequences to make informed decisions. MVPBench includes 5K question-answering tests involving 2.7K video clips sourced from existing datasets and manually annotated clips. Extensive evaluations reveal that current models struggle to process multi-video inputs effectively, underscoring substantial limitations in their multi-video comprehension. We anticipate MVPBench will drive advancements in multi-video perception.
Problem

Research questions and friction points this paper is trying to address.

multi-video perception
multi-modal video understanding
evaluation benchmark
video comprehension
multi-video interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-video perception
multi-modal video understanding
benchmark
multi-modal LLMs
video reasoning
πŸ”Ž Similar Papers
2024-02-20International Conference on Machine LearningCitations: 30
Authors
Purui Bai — MAIS & NLPR, Institute of Automation, Chinese Academy of Sciences
Tao Wu — ShanghaiTech University
Jiayang Sun — MAIS & NLPR, Institute of Automation, Chinese Academy of Sciences
Xinyue Liu — Amazon
Huaibo Huang — NLPR, MAIS, CASIA
Ran He — MAIS & NLPR, Institute of Automation, Chinese Academy of Sciences