LLM as BT-Planner: Leveraging LLMs for Behavior Tree Generation in Robot Task Planning

📅 2024-09-16
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
Long-horizon robotic assembly tasks involve long temporal dependencies and complex part interrelationships, leaving behavior tree (BT) planning heavily reliant on manual design, with low efficiency and poor scalability as a result. This paper proposes LLM-as-BT-Planner, a framework that leverages large language models (LLMs) to autonomously generate semantically valid, modular BTs. The authors introduce four in-context learning strategies and empirically validate the efficacy of supervised fine-tuning for small-scale LLMs on BT generation. To ensure executability and interpretability, the method jointly enforces BT syntactic constraints and instruction semantic parsing. Experiments in simulation and on real robotic platforms show improvements over LLM-based planners and handcrafted BT baselines: +28.6% task success rate, +34.1% BT structural accuracy, and enhanced execution robustness.

๐Ÿ“ Abstract
Robotic assembly tasks remain an open challenge due to their long-horizon nature and complex part relations. Behavior trees (BTs) are increasingly used in robot task planning for their modularity and flexibility, but creating them manually can be effort-intensive. Large language models (LLMs) have recently been applied to robotic task planning for generating action sequences, yet their ability to generate BTs has not been fully investigated. To this end, we propose LLM-as-BT-Planner, a novel framework that leverages LLMs for BT generation in robotic assembly task planning. Four in-context learning methods are introduced to utilize the natural language processing and inference capabilities of LLMs for producing task plans in BT format, reducing manual effort while ensuring robustness and comprehensibility. Additionally, we evaluate the performance of fine-tuned smaller LLMs on the same tasks. Experiments in both simulated and real-world settings demonstrate that our framework enhances LLMs' ability to generate BTs, improving success rate through in-context learning and supervised fine-tuning.
Problem

Research questions and friction points this paper is trying to address.

Automates behavior tree generation for robotic assembly tasks.
Reduces manual effort in creating behavior trees using LLMs.
Enhances LLMs' ability to generate robust and comprehensible task plans.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate Behavior Trees for robot tasks.
In-context learning enhances BT generation efficiency.
Fine-tuned smaller LLMs improve task success rates.
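To ground the idea of a task plan "in BT format", here is a minimal behavior-tree sketch, not the paper's implementation: a Sequence composite ticks its children in order and fails fast, which is the kind of modular structure an LLM planner would emit for an assembly task. The node names ("pick", "align", "insert") are illustrative assumptions.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node wrapping a primitive robot skill."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Composite node: succeeds only if every child succeeds, in order."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE  # fail fast, skip remaining children
        return SUCCESS

# A hypothetical LLM-generated plan for a peg-insertion subtask:
plan = Sequence([
    Action("pick", lambda: True),
    Action("align", lambda: True),
    Action("insert", lambda: True),
])
print(plan.tick())  # SUCCESS when all skills succeed
```

Because each subtree is self-contained, an LLM can compose or repair plans node by node, which is what makes the BT representation attractive compared to flat action sequences.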
Jicong Ao
Chair of Robotics and Systems Intelligence, MIRMI - Munich Institute of Robotics and Machine Intelligence, Technical University of Munich, Germany
Fan Wu
Chair of Robotics and Systems Intelligence, MIRMI - Munich Institute of Robotics and Machine Intelligence, Technical University of Munich, Germany
Yansong Wu
TUM
Robotics, tactile manipulation, robot learning, behavior trees
Abdalla Swikir
Assistant Professor, MBZUAI
Robotics, Control Theory, Formal Methods, Hybrid Systems
Sami Haddadin
MBZUAI
Robotics, AI, Control, Neurotech, Automating Science