TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models

📅 2024-10-30
🏛️ arXiv.org
📈 Citations: 22
Influential: 1
🤖 AI Summary
Existing video understanding benchmarks overestimate the temporal reasoning capabilities of multimodal foundation models (MFMs): many of their questions can be answered from a single frame, from sparsely sampled frames, or from out-of-order frames, so they fail to test continuous dynamic modeling. Method: We introduce TOMATO (Temporal Reasoning Multimodal Evaluation), a benchmark built to rigorously assess visual temporal reasoning, comprising 1,484 human-annotated questions over 1,417 videos with high temporal dependency, spanning six dynamic understanding tasks. Three evaluation principles with corresponding metrics (Multi-Frame Gain, Frame Order Sensitivity, and Frame Information Disparity) guide the design of the question-answering tasks, and human baselines are established across tasks. Contribution/Results: Experiments show that current MFMs largely lack frame-order awareness and the ability to integrate dynamics over time: the best-performing model trails human accuracy by 57.3%. TOMATO thus provides a diagnostic, quantitative testbed for evaluating temporal reasoning in video understanding models.
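
The entry does not spell out how the three metrics are computed; the sketch below is one plausible formulation in Python, assuming each metric contrasts model accuracy under different frame regimes (full clip vs. single frame, ordered vs. shuffled, and per-frame spread). Function names and signatures are illustrative, not the paper's.

```python
from statistics import pstdev
from typing import Sequence

# Hypothetical formulations of the three evaluation principles named above.
# The paper's exact metric definitions are not given in this entry, so these
# functions only capture the stated intuition: a question should not be
# answerable from one frame, from sparse frames, or from shuffled frames.

def multi_frame_gain(acc_full_sequence: float, acc_best_single_frame: float) -> float:
    """Accuracy improvement from seeing the whole clip vs. its best single frame;
    larger values indicate stronger temporal dependency."""
    return acc_full_sequence - acc_best_single_frame

def frame_order_sensitivity(acc_ordered: float, acc_shuffled: float) -> float:
    """Accuracy drop when frames are shuffled; larger values indicate that the
    correct temporal order actually matters."""
    return acc_ordered - acc_shuffled

def frame_information_disparity(per_frame_acc: Sequence[float]) -> float:
    """Spread of single-frame accuracies across a clip; a small spread suggests
    no individual frame leaks the answer."""
    return pstdev(per_frame_acc)
```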

📝 Abstract
Existing benchmarks often highlight the remarkable performance achieved by state-of-the-art Multimodal Foundation Models (MFMs) in leveraging temporal context for video understanding. However, how well do the models truly perform visual temporal reasoning? Our study of existing benchmarks shows that this capability of MFMs is likely overestimated as many questions can be solved by using a single, few, or out-of-order frames. To systematically examine current visual temporal reasoning tasks, we propose three principles with corresponding metrics: (1) Multi-Frame Gain, (2) Frame Order Sensitivity, and (3) Frame Information Disparity. Following these principles, we introduce TOMATO, Temporal Reasoning Multimodal Evaluation, a novel benchmark crafted to rigorously assess MFMs' temporal reasoning capabilities in video understanding. TOMATO comprises 1,484 carefully curated, human-annotated questions spanning six tasks (i.e., action count, direction, rotation, shape&trend, velocity&frequency, and visual cues), applied to 1,417 videos, including 805 self-recorded and -generated videos, that encompass human-centric, real-world, and simulated scenarios. Our comprehensive evaluation reveals a human-model performance gap of 57.3% with the best-performing model. Moreover, our in-depth analysis uncovers more fundamental limitations beyond this gap in current MFMs. While they can accurately recognize events in isolated frames, they fail to interpret these frames as a continuous sequence. We believe TOMATO will serve as a crucial testbed for evaluating the next-generation MFMs and as a call to the community to develop AI systems capable of comprehending human world dynamics through the video modality.
Problem

Research questions and friction points this paper is trying to address.

Assessing the visual temporal reasoning capabilities of multimodal foundation models (MFMs)
Determining whether models truly leverage temporal context in video, rather than answering from a single frame, a few frames, or out-of-order frames
Measuring models' ability to interpret frames as a continuous sequence rather than as isolated events
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed three evaluation principles with corresponding metrics: Multi-Frame Gain, Frame Order Sensitivity, and Frame Information Disparity
Introduced TOMATO, a benchmark of 1,484 human-annotated questions across 1,417 videos (including 805 self-recorded and self-generated) for rigorous temporal reasoning evaluation
Comprehensive evaluation across six tasks, revealing a 57.3% human-model accuracy gap (see the sketch after this list)
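
As a small companion to the reported 57.3% gap, here is a minimal sketch of how an overall human-model gap could be aggregated from per-task accuracies. The task list comes from the abstract; the scores are placeholders, not the paper's per-task numbers, and the unweighted-mean aggregation is an assumption.

```python
# Minimal sketch: aggregate per-task accuracy over TOMATO's six tasks and
# report the human-model gap. Task names come from the abstract; the scores
# below are placeholders, not the paper's reported per-task numbers.

TASKS = ["action count", "direction", "rotation",
         "shape & trend", "velocity & frequency", "visual cues"]

def overall_accuracy(per_task_acc: dict[str, float]) -> float:
    """Unweighted mean over the six tasks (weighting by question count is an
    equally plausible choice; the entry does not specify)."""
    return sum(per_task_acc[t] for t in TASKS) / len(TASKS)

human_baseline = {t: 0.95 for t in TASKS}   # placeholder human accuracy
best_model     = {t: 0.38 for t in TASKS}   # placeholder best-model accuracy

gap = overall_accuracy(human_baseline) - overall_accuracy(best_model)
print(f"human-model gap: {gap:.1%}")         # the paper reports 57.3%
```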