ET-Plan-Bench: Embodied Task-level Planning Benchmark Towards Spatial-Temporal Cognition with Foundation Models

📅 2024-10-02
🏛️ arXiv.org
🤖 AI Summary
Existing benchmarks inadequately evaluate large language models' (LLMs) joint spatial, temporal, and causal reasoning capabilities in embodied tasks. Method: We introduce ET-Plan-Bench, an embodied task-level planning benchmark designed around spatio-temporal cognition, featuring controllable tasks across multiple difficulty and complexity levels. It systematically assesses LLMs on spatial relational constraints, occlusion-aware goal handling, and causal reasoning over action sequences. Its evaluation framework combines spatial constraint modeling with spatio-temporal causal diagnosis, enabling closed-loop interaction, real-time feedback, and dynamic replanning across simulators including AI2-THOR and Habitat. Contribution/Results: Experiments show that state-of-the-art LLMs (e.g., GPT-4, LLaMA, Mistral) perform reasonably well on simple navigation but degrade significantly on tasks demanding deeper spatio-temporal and causal understanding, demonstrating the benchmark's rigor and diagnostic utility for embodied AI evaluation.

📝 Abstract
Recent advancements in Large Language Models (LLMs) have spurred numerous attempts to apply these technologies to embodied tasks, particularly focusing on high-level task planning and task decomposition. To further explore this area, we introduce a new embodied task planning benchmark, ET-Plan-Bench, which specifically targets embodied task planning using LLMs. It features a controllable and diverse set of embodied tasks spanning multiple levels of difficulty and complexity, and is designed to evaluate two critical dimensions of LLMs' application to embodied task understanding: spatial understanding (relational constraints, occlusion of target objects) and temporal and causal understanding of the sequence of actions in the environment. By using multi-source simulators as the backend, it provides immediate environment feedback to LLMs, enabling them to interact dynamically with the environment and re-plan as necessary. We evaluated state-of-the-art open-source and closed-source foundation models, including GPT-4, LLaMA, and Mistral, on our proposed benchmark. While they perform adequately on simple navigation tasks, their performance deteriorates significantly on tasks that require a deeper understanding of spatial, temporal, and causal relationships. Our benchmark thus distinguishes itself as a large-scale, quantifiable, highly automated, and fine-grained diagnostic framework that presents a significant challenge to the latest foundation models. We hope it can spark and drive further research in embodied task planning using foundation models.
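The closed-loop interaction the abstract describes (LLM proposes an action, the simulator executes it and returns feedback, the planner re-plans on failure) can be sketched roughly as follows. This is a minimal illustrative loop, not ET-Plan-Bench's actual API: every function and state variable here (`stub_llm_planner`, `stub_simulator`, `door_open`) is a hypothetical stand-in.

```python
# Sketch of a closed-loop embodied planning episode: propose an action,
# execute it in the simulator, read the feedback, and re-plan as needed.
# All names are illustrative stand-ins, not the benchmark's real interface.

def stub_llm_planner(goal, history):
    """Stand-in for an LLM call: choose the next action from feedback."""
    if history and history[-1][1] == "blocked":
        return "open_door"          # re-plan around the reported obstacle
    return "move_to_target"

def stub_simulator(action, state):
    """Stand-in for a simulator backend (e.g. AI2-THOR or Habitat)."""
    if action == "move_to_target" and not state["door_open"]:
        return "blocked", state     # immediate environment feedback
    if action == "open_door":
        state["door_open"] = True
        return "ok", state
    return "success", state

def run_episode(goal, max_steps=10):
    """Closed loop: plan, act, observe feedback, repeat until done."""
    state = {"door_open": False}
    history = []
    for _ in range(max_steps):
        action = stub_llm_planner(goal, history)
        feedback, state = stub_simulator(action, state)
        history.append((action, feedback))
        if feedback == "success":
            return True, history
    return False, history

success, trace = run_episode("fetch the mug behind the closed door")
```

In this toy trace the planner first fails ("blocked"), opens the door in response to that feedback, and then succeeds, which is the dynamic-replanning behavior the benchmark is built to probe.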
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs' spatial-temporal task planning
Assess causal understanding in embodied tasks
Benchmark foundation models' performance in complex scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs for embodied task planning
Spatial-temporal cognition benchmark
Dynamic environment interaction feedback
Authors

Lingfeng Zhang, PhD student at Tsinghua University (Embodied AI)
Yuening Wang, Huawei Noah's Ark Lab
Hongjian Gu, Huawei Noah's Ark Lab
Atia Hamidizadeh, Huawei Noah's Ark Lab
Zhanguang Zhang, Huawei Noah's Ark Lab
Yuecheng Liu, Huawei Noah's Ark Lab
Yutong Wang, Huawei Noah's Ark Lab
David Gamaliel Arcos Bravo, Huawei Noah's Ark Lab
Junyi Dong, Huawei Cloud
Shunbo Zhou, Huawei | The Chinese University of Hong Kong (Robotics, Embodied AI, Autonomous Navigation)
Tongtong Cao, Researcher, Huawei Noah's Ark Lab (Robotics, Embodied AI, Autonomous Driving)
Yuzheng Zhuang, Senior Researcher, Huawei Noah's Ark Lab (Reinforcement Learning, Optimization, Autonomous Driving, Communication)
Yingxue Zhang, Huawei Noah's Ark Lab
Jianye Hao, Huawei Noah's Ark Lab / Tianjin University (Multiagent Systems, Embodied AI)