Reasoning over Video: Evaluating How MLLMs Extract, Integrate, and Reconstruct Spatiotemporal Evidence

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical gap in existing video reasoning benchmarks, which predominantly focus on extractive tasks where answers are directly observable, thereby neglecting systematic evaluation of abstractive spatiotemporal reasoning in multimodal large language models (MLLMs): their ability to integrate observations over time, synthesize fragmented cues, and infer implicit spatial and contextual structure. To this end, the paper formally defines abstractive spatiotemporal reasoning over videos, introduces a fine-grained evaluation taxonomy, and presents VAEX-BENCH, the first controllable, synthetically generated egocentric video benchmark spanning three hierarchical levels (objects, rooms, and floorplans) and five corresponding reasoning tasks, each paired with an extractive counterpart. Experimental results show that current models perform substantially worse on abstractive tasks than on extractive ones, exposing fundamental limitations in integrating spatiotemporal evidence and inferring latent structural relationships.

📝 Abstract
The growing interest in embodied agents increases the demand for spatiotemporal video understanding, yet existing benchmarks largely emphasize extractive reasoning, where answers are explicitly present within observed spatiotemporal events. It remains unclear whether multimodal large language models can instead perform abstractive spatiotemporal reasoning, which requires integrating observations over time, combining dispersed cues, and inferring implicit spatial and contextual structure. To address this gap, we formalize abstractive spatiotemporal reasoning from videos by introducing a structured evaluation taxonomy that systematically targets its core dimensions, and we construct a controllable, scenario-driven synthetic egocentric video dataset tailored to evaluating abstractive spatiotemporal reasoning, spanning object-, room-, and floor-plan-level scenarios. Based on this framework, we present VAEX-BENCH, a benchmark comprising five abstractive reasoning tasks together with their extractive counterparts. Our extensive experiments compare the performance of state-of-the-art MLLMs under extractive and abstractive settings, exposing their limitations on abstractive tasks and providing a fine-grained analysis of the underlying bottlenecks. The dataset will be released soon.
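The abstract's paired-task design, where each abstractive reasoning task has an extractive counterpart, can be sketched as an evaluation loop that measures the accuracy gap between the two settings. This is a minimal illustrative sketch: the item fields, task levels, and model interface are assumptions, since VAEX-BENCH has not yet been released.

```python
# Hypothetical sketch of a paired extractive/abstractive evaluation protocol.
# Field names and the predict() interface are illustrative assumptions,
# not the released VAEX-BENCH API.
from dataclasses import dataclass


@dataclass
class BenchmarkItem:
    video_id: str
    level: str       # assumed: "object" | "room" | "floorplan"
    task: str        # one of the five reasoning tasks
    mode: str        # "extractive" or "abstractive"
    question: str
    answer: str


def accuracy_gap(items, predict):
    """Return extractive minus abstractive accuracy for a model's predictions."""
    stats = {"extractive": [0, 0], "abstractive": [0, 0]}  # [correct, total]
    for item in items:
        correct = predict(item) == item.answer
        stats[item.mode][0] += int(correct)
        stats[item.mode][1] += 1
    acc = {mode: c / t if t else 0.0 for mode, (c, t) in stats.items()}
    return acc["extractive"] - acc["abstractive"]
```

A large positive gap on such paired items would reproduce, in miniature, the paper's finding that models handle extractive variants far better than their abstractive counterparts.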
Problem

Research questions and friction points this paper is trying to address.

abstractive spatiotemporal reasoning
video understanding
multimodal large language models
spatiotemporal evidence
reasoning benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

abstractive spatiotemporal reasoning
multimodal large language models
structured evaluation taxonomy
synthetic egocentric video dataset
VAEX-BENCH
Seunghwan Bang
Ulsan National Institute of Science and Technology (UNIST), Ulsan, Republic of Korea
Hwanjun Song
Assistant Professor, KAIST
LLM · Trustworthy AI · Human-AI Alignment · Data-centric AI