MATEO: A Multimodal Benchmark for Temporal Reasoning and Planning in LVLMs

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that current large vision-language models (LVLMs) struggle to accurately comprehend temporal execution order (TEO) in real-world task planning and lack high-quality, multimodal, graph-structured evaluation benchmarks. To this end, we introduce MATEO—the first benchmark representing multimodal TEO as directed acyclic graphs—constructed by professional editors who curate step-by-step recipes with images, complemented by a scalable crowdsourcing pipeline for annotating temporal dependencies. MATEO overcomes the limitations of traditional linear chains or text-only settings, enabling fine-grained evaluation of LVLMs’ complex planning capabilities. Using a standardized construction protocol and evaluation framework, we systematically assess six state-of-the-art LVLMs, analyzing the impact of model scale, context length, input structure, and fine-tuning strategies, thereby revealing critical bottlenecks and promising directions for improving multimodal temporal reasoning.

📝 Abstract
AI agents need to plan to achieve complex goals that involve orchestrating perception, sub-goal decomposition, and execution. These plans consist of ordered steps structured according to a Temporal Execution Order (TEO), a directed acyclic graph that ensures each step executes only after its preconditions are satisfied. Existing research on foundational models' understanding of temporal execution is limited to automatically derived annotations, approximations of the TEO as a linear chain, or text-only inputs. To address this gap, we introduce MATEO (MultimodAl Temporal Execution Order), a benchmark designed to assess and improve the temporal reasoning abilities of Large Vision Language Models (LVLMs) required for real-world planning. We acquire a high-quality professional multimodal recipe corpus, authored through a standardized editorial process that decomposes instructions into discrete steps, each paired with corresponding images. We collect TEO annotations as graphs by designing and using a scalable crowdsourcing pipeline. Using MATEO, we evaluate six state-of-the-art LVLMs across model scales, varying language context, multimodal input structure, and fine-tuning strategies.
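To make the TEO-as-DAG idea concrete, here is a minimal sketch (not from the paper; the recipe, step names, and `is_valid_order` helper are hypothetical) of how a recipe's temporal dependencies can be represented as a graph and how any topological order of that graph yields a valid execution plan, whereas a linear chain would commit to a single fixed order:

```python
from graphlib import TopologicalSorter

# Hypothetical recipe: each step maps to the precondition steps
# that must complete before it can start (the TEO as a DAG).
teo = {
    "boil water": [],
    "chop vegetables": [],
    "cook pasta": ["boil water"],
    "saute vegetables": ["chop vegetables"],
    "combine and serve": ["cook pasta", "saute vegetables"],
}

def is_valid_order(order, deps):
    """Check that every step appears only after all of its preconditions."""
    position = {step: i for i, step in enumerate(order)}
    return all(
        position[pre] < position[step]
        for step, pres in deps.items()
        for pre in pres
    )

# Any topological order of the DAG is a valid plan; there may be many,
# e.g. "boil water" and "chop vegetables" can be freely interleaved.
plan = list(TopologicalSorter(teo).static_order())
print(is_valid_order(plan, teo))  # True
```

This illustrates why a linear-chain approximation loses information: it encodes only one of the possible valid orders, while the DAG captures the full set of partial-order constraints a model must reason over.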
Problem

Research questions and friction points this paper is trying to address.

temporal reasoning
planning
Large Vision Language Models
Temporal Execution Order
multimodal benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal Reasoning
Multimodal Benchmark
Large Vision Language Models
Temporal Execution Order
Planning