VNU-Bench: A Benchmarking Dataset for Multi-Source Multimodal News Video Understanding

πŸ“… 2026-01-06
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing benchmarks for news video understanding are limited to single-source, intra-video reasoning, which falls short of the demands of real-world scenarios requiring multi-source, cross-video news analysis. To address this gap, this work introduces the first multi-source cross-video news understanding task and presents a novel multimodal benchmark dataset tailored to this challenge, comprising 429 news groups, 1,405 videos, and 2,501 high-quality questions. The dataset incorporates newly designed contrastive, alignment, and synthesis-type questions, generated through a hybrid human-model question-answering pipeline that ensures both data quality and scalability. Experimental results demonstrate that current multimodal large language models perform substantially below expectations on this benchmark, underscoring the task's difficulty and its potential to drive future research in multimodal news comprehension.

πŸ“ Abstract
News videos are carefully edited multimodal narratives that combine narration, visuals, and external quotations into coherent storylines. In recent years, there have been significant advances in evaluating multimodal large language models (MLLMs) for news video understanding. However, existing benchmarks largely focus on single-source, intra-video reasoning, where each report is processed in isolation. In contrast, real-world news consumption is inherently multi-sourced: the same event is reported by different outlets with complementary details, distinct narrative choices, and sometimes conflicting claims that unfold over time. Robust news understanding therefore requires models to compare perspectives from different sources, align multimodal evidence across sources, and synthesize multi-source information. To fill this gap, we introduce VNU-Bench, the first benchmark for multi-source, cross-video understanding in the news domain. We design a set of new question types that test models' ability to understand multi-source multimodal news from a variety of angles. We also design a novel hybrid human-model QA generation process that addresses the issues of scalability and quality control in building a large dataset for cross-source news understanding. The dataset comprises 429 news groups, 1,405 videos, and 2,501 high-quality questions. Comprehensive evaluation of both closed- and open-source multimodal models shows that VNU-Bench poses substantial challenges for current MLLMs.
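The abstract does not detail how the hybrid human-model QA generation process works. As an illustrative sketch only (all class, function, and template names below are assumptions, not taken from the paper), one common shape for such a pipeline is: a model proposes candidate cross-source questions over pairs of outlets in a news group, cheap automatic checks filter them, and the survivors are queued for human verification.

```python
from dataclasses import dataclass

# Hypothetical sketch of a hybrid human-model QA pipeline: a model step
# proposes candidate questions, an automatic gate filters them, and a
# human-review step approves the rest. Names are illustrative only.

@dataclass
class NewsGroup:
    event: str
    sources: list  # outlets reporting the same event

# Templates mirror the three question types named in the summary.
QUESTION_TEMPLATES = {
    "contrastive": "How do {a} and {b} differ in reporting '{event}'?",
    "alignment": "Which claims about '{event}' do {a} and {b} both support?",
    "synthesis": "Combining {a} and {b}, summarize what happened in '{event}'.",
}

def propose_questions(group: NewsGroup) -> list:
    """Stand-in for the model step: instantiate templates over source pairs."""
    out = []
    for i, a in enumerate(group.sources):
        for b in group.sources[i + 1:]:
            for qtype, tpl in QUESTION_TEMPLATES.items():
                out.append({"type": qtype,
                            "text": tpl.format(a=a, b=b, event=group.event)})
    return out

def auto_filter(questions, min_len=40):
    """Cheap automatic quality gate applied before human review."""
    return [q for q in questions if len(q["text"]) >= min_len]

def human_review(questions, approve=lambda q: True):
    """Placeholder for the manual pass; `approve` stands in for an annotator."""
    return [q for q in questions if approve(q)]

group = NewsGroup(event="port strike",
                  sources=["Outlet A", "Outlet B", "Outlet C"])
candidates = propose_questions(group)
dataset = human_review(auto_filter(candidates))
print(len(candidates), len(dataset))
```

In this toy run, three sources yield three source pairs, each instantiated with the three question types; the actual pipeline presumably replaces the template step with an MLLM and the `approve` callback with annotator judgments.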
Problem

Research questions and friction points this paper is trying to address.

multi-source
multimodal
news video understanding
cross-video reasoning
benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-source multimodal understanding
cross-video reasoning
news video benchmark
hybrid QA generation
multimodal large language models