HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing VideoQA benchmarks are vulnerable to single-cue or linguistic-prior shortcuts and so fail to rigorously evaluate the integration of visual evidence across time. To address this, we introduce HERBench, the first VideoQA benchmark that explicitly enforces multi-evidence temporal integration: each question requires fusing visual cues from ≥3 non-overlapping video segments, eliminating single-frame and language-only shortcuts. We propose the Minimum Required Frame-Set (MRFS) to quantify integration difficulty and design 12 compositional reasoning tasks targeting cross-temporal identity binding, temporal relations, and co-occurrence verification. Questions follow a five-choice format and are accompanied by fine-grained evidence annotations, computationally tractable MRFS modeling, and a dual-bottleneck diagnostic framework that distinguishes retrieval failure from fusion failure. Across 13 state-of-the-art video foundation models, accuracy ranges from only 31% to 42%, substantially below human performance, revealing evidence omission and integration failure as fundamental bottlenecks.

📝 Abstract
Video Large Language Models (Video-LLMs) are rapidly improving, yet current Video Question Answering (VideoQA) benchmarks often allow questions to be answered from a single salient cue, under-testing reasoning that must aggregate multiple, temporally separated visual evidence. We present HERBench, a VideoQA benchmark purpose-built to assess multi-evidence integration across time. Each question requires aggregating at least three non-overlapping evidential cues across distinct video segments, so neither language priors nor a single snapshot can suffice. HERBench comprises 26K five-way multiple-choice questions organized into twelve compositional tasks that probe identity binding, cross-entity relations, temporal ordering, co-occurrence verification, and counting. To make evidential demand measurable, we introduce the Minimum Required Frame-Set (MRFS), the smallest number of frames a model must fuse to answer correctly, and show that HERBench imposes substantially higher demand than prior datasets (mean MRFS 5.5 vs. 2.6-4.2). Evaluating 13 state-of-the-art Video-LLMs on HERBench reveals pervasive failures: accuracies of 31-42% are only slightly above the 20% random-guess baseline. We disentangle this failure into two critical bottlenecks: (1) a retrieval deficit, where frame selectors overlook key evidence, and (2) a fusion deficit, where models fail to integrate information even when all necessary evidence is provided. By making cross-time evidence both unavoidable and quantifiable, HERBench establishes a principled target for advancing robust, compositional video understanding.
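The MRFS idea above can be made concrete with a small sketch: search for the smallest subset of candidate frames under which the model still answers correctly. The code below is illustrative only; `answers_correctly` is a hypothetical stand-in for querying a Video-LLM (here a toy that succeeds iff the chosen frames cover every required evidence segment), not the paper's actual protocol.

```python
from itertools import combinations

# Toy assumption: a question is answerable iff the chosen frames cover
# all of its required evidence segments (names are illustrative).
REQUIRED_SEGMENTS = {"seg_a", "seg_b", "seg_c"}  # >= 3 non-overlapping cues

def answers_correctly(frames):
    """Stand-in for querying a Video-LLM with only `frames` visible."""
    covered = {segment for _, segment in frames}
    return REQUIRED_SEGMENTS <= covered

def mrfs(candidate_frames):
    """Minimum Required Frame-Set size: the smallest number of frames
    that still yields a correct answer (exhaustive subset search)."""
    for k in range(1, len(candidate_frames) + 1):
        for subset in combinations(candidate_frames, k):
            if answers_correctly(subset):
                return k
    return None  # unanswerable even with all frames shown

# Candidate frames tagged with the evidence segment each depicts.
frames = [("f1", "seg_a"), ("f2", "seg_a"), ("f3", "seg_b"),
          ("f4", "seg_c"), ("f5", "seg_c")]
print(mrfs(frames))  # 3: one frame per required segment
```

A mean MRFS of 5.5 (vs. 2.6-4.2 in prior datasets) then says that, on average, no subset smaller than about five or six frames suffices.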
Problem

Research questions and friction points this paper is trying to address.

Assesses multi-evidence integration across time in VideoQA
Measures minimal frames needed for correct video understanding
Identifies retrieval and fusion deficits in Video-LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

HERBench benchmark tests multi-evidence integration in VideoQA
Introduces Minimum Required Frame-Set to measure evidence demand
Identifies retrieval and fusion deficits in Video-LLM performance
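The dual-bottleneck diagnostic separates the two deficits by comparing per-question accuracy under the model's own frame selection against accuracy when all annotated evidence frames are supplied (an oracle condition). A minimal sketch, assuming this two-condition protocol; the function and field names are illustrative, not the paper's API:

```python
def diagnose(results):
    """results: per-question (correct_with_retrieved_frames,
    correct_with_oracle_evidence) boolean pairs."""
    n = len(results)
    acc_retrieved = sum(r for r, _ in results) / n
    acc_oracle = sum(o for _, o in results) / n
    return {
        # Gap closed by perfect retrieval: frame selector missed evidence.
        "retrieval_deficit": acc_oracle - acc_retrieved,
        # Errors that persist even with all evidence: integration failure.
        "fusion_deficit": 1.0 - acc_oracle,
    }

toy = [(False, True), (False, False), (True, True), (False, True)]
print(diagnose(toy))  # {'retrieval_deficit': 0.5, 'fusion_deficit': 0.25}
```

Under this decomposition, a model with a large retrieval deficit needs better frame selection, while a large fusion deficit persists even with oracle evidence.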