🤖 AI Summary
Existing benchmarks inadequately evaluate large language models’ (LLMs) joint spatio-temporal-causal reasoning capabilities in embodied tasks. Method: We introduce the first embodied task-level planning benchmark explicitly designed for spatio-temporal cognition, featuring controllable tasks across multiple difficulty and complexity levels. It systematically assesses LLMs on spatial relational constraints, occlusion-aware goal handling, and action-sequencing causal reasoning. Our fine-grained evaluation framework integrates spatial constraint modeling with spatio-temporal causal diagnosis, enabling closed-loop interaction, real-time feedback, and dynamic replanning across simulators including AI2-THOR and Habitat. Contribution/Results: Experiments reveal that state-of-the-art LLMs (e.g., GPT-4, LLaMA, Mistral) perform reasonably well on simple navigation but exhibit significant performance degradation on tasks demanding deep spatio-temporal-causal understanding—demonstrating the benchmark’s rigor and diagnostic utility for embodied AI evaluation.
📝 Abstract
Recent advancements in Large Language Models (LLMs) have spurred numerous attempts to apply these technologies to embodied tasks, particularly high-level task planning and task decomposition. To further explore this area, we introduce ET-Plan-Bench, a new benchmark that specifically targets embodied task planning with LLMs. It features a controllable and diverse set of embodied tasks spanning different levels of difficulty and complexity, and is designed to evaluate two critical dimensions of LLMs' application to embodied task understanding: spatial understanding (relation constraints, occlusion of target objects) and temporal and causal understanding of action sequences in the environment. By using multiple simulators as the backend, it provides immediate environment feedback to LLMs, enabling them to interact dynamically with the environment and re-plan as necessary. We evaluated state-of-the-art open-source and closed-source foundation models, including GPT-4, LLaMA, and Mistral, on our proposed benchmark. While they perform adequately on simple navigation tasks, their performance deteriorates significantly on tasks that require a deeper understanding of spatial, temporal, and causal relationships. Our benchmark thus distinguishes itself as a large-scale, quantifiable, highly automated, and fine-grained diagnostic framework that poses a significant challenge to the latest foundation models. We hope it can spark and drive further research in embodied task planning with foundation models.