Scalable Task Planning via Large Language Models and Structured World Representations

📅 2024-09-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the computational intractability of task-level planning in large-scale environments—caused by state-space explosion—this paper proposes an LLM-driven symbolic-neural hybrid planning framework. Our core innovation is the first use of a large language model as a lightweight semantic filter, jointly leveraging structured world modeling and commonsense reasoning to enable semantic-aware state-space pruning. This approach preserves the interpretability of symbolic planning while substantially improving generalization and computational efficiency: planning time is reduced by 62% in a household simulation environment; and multi-step real-world manipulation tasks are successfully executed on a 7-DOF robotic arm, demonstrating practicality and robustness in complex physical settings. The framework bridges high-level semantic reasoning with low-level control, enabling scalable and interpretable task planning without sacrificing execution fidelity.
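The pruning idea described above can be sketched in a few lines. This is not the authors' code; `query_llm` is a hypothetical stand-in for a real LLM call, stubbed here with a fixed commonsense answer for a household task, to show how an LLM's relevance judgment shrinks the set of objects the planner must ground actions over.

```python
# Minimal sketch (not the paper's implementation) of LLM-driven semantic
# state-space pruning for task planning.

def query_llm(task: str, objects: list[str]) -> list[str]:
    """Hypothetical LLM call: return the objects relevant to the task.
    Stubbed with a fixed commonsense answer for illustration."""
    relevant = {"mug", "coffee_machine", "kitchen_counter"}
    return [o for o in objects if o in relevant]

def prune_state_space(task: str, objects: list[str]) -> list[str]:
    """Keep only the objects the LLM judges task-relevant; the symbolic
    planner then operates on this reduced state space."""
    relevant = set(query_llm(task, objects))
    return [o for o in objects if o in relevant]

world = ["mug", "coffee_machine", "kitchen_counter", "sofa", "tv_remote", "bed"]
pruned = prune_state_space("make coffee", world)
print(pruned)  # planner grounds actions over 3 objects instead of 6
```

In the paper's framing, this filtering step precedes symbolic planning, so the planner's search space shrinks while its interpretability is retained.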

📝 Abstract
Planning methods struggle with computational intractability in solving task-level problems in large-scale environments. This work explores leveraging the commonsense knowledge encoded in LLMs to empower planning techniques to deal with these complex scenarios. We achieve this by efficiently using LLMs to prune irrelevant components from the planning problem's state space, substantially simplifying its complexity. We demonstrate the efficacy of this system through extensive experiments within a household simulation environment, alongside real-world validation using a 7-DoF manipulator (video https://youtu.be/6ro2UOtOQS4).
Problem

Research questions and friction points this paper is trying to address.

Scalable task planning in large environments
Leveraging LLMs for commonsense knowledge
Pruning state space to simplify complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs prune state space
Structured world representations simplify
Household simulation validates efficacy
Rodrigo Pérez-Dattari
Department of Cognitive Robotics, Delft University of Technology, The Netherlands
Zhaoting Li
TU Delft, Cognitive Robotics
Robotics · Imitation Learning · Motion Planning · Human-Robot Interaction
Robert Babuška
Department of Cognitive Robotics, Delft University of Technology, The Netherlands; Czech Institute of Informatics, Robotics, and Cybernetics, Czech Technical University in Prague, Czech Republic
Jens Kober
Associate Professor, CoR, TU Delft
Robotics · Machine Learning
C. D. Santina
Department of Cognitive Robotics, Delft University of Technology, The Netherlands; Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Wessling, Germany