🤖 AI Summary
This work addresses a challenge faced by autonomous agents in dynamic open-world environments: when the planning domain lacks operators corresponding to novel objects, symbolic planners cannot produce effective plans. The authors propose a neuro-symbolic architecture that integrates the commonsense reasoning capabilities of large language models (LLMs) with symbolic planning and reinforcement learning. Specifically, the LLM is leveraged to identify missing operators, generate feasible plans with the symbolic planner, and automatically construct reward functions that guide the agent in learning manipulation policies for the newly identified operators. Evaluated on operator discovery and continuous robotic manipulation tasks, the proposed method significantly outperforms state-of-the-art approaches, demonstrating robust autonomous adaptation and skill extension in previously unseen scenarios.
📝 Abstract
In dynamic open-world environments, autonomous agents often encounter novelties that hinder their ability to find plans to achieve their goals. Specifically, traditional symbolic planners fail to generate plans when the robot's planning domain lacks the operators that enable it to interact appropriately with novel objects in the environment. We propose a neuro-symbolic architecture that integrates symbolic planning, reinforcement learning, and a large language model (LLM) to learn how to handle novel objects. In particular, we leverage the commonsense reasoning capability of the LLM to identify missing operators, generate plans with the symbolic AI planner, and write reward functions that guide the reinforcement learning agent in learning control policies for the newly identified operators. Our method outperforms state-of-the-art methods in both operator discovery and operator learning in continuous robotic domains.
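The loop the abstract describes — the planner fails on a novel object, the LLM proposes the missing operator plus a reward function, and reinforcement learning acquires the corresponding control policy — can be sketched as follows. This is a minimal illustrative mock, not the paper's implementation: the "LLM" is a hard-coded lookup, the planner handles only one-step plans, and the "RL" stage is random search over a single door-angle action; all operator and function names are hypothetical.

```python
import random

# Symbolic domain: operator name -> the goal predicate it achieves.
# The novel goal "open(door)" has no corresponding operator yet.
DOMAIN = {"pick(obj)": "holding(obj)", "place(obj)": "placed(obj)"}


def plan(goal, domain):
    """Toy planner: one-step plans only; None signals a missing operator."""
    for op, effect in domain.items():
        if effect == goal:
            return [op]
    return None


def llm_propose(goal):
    """Mocked LLM: names the missing operator and writes a shaped reward
    function for it (reward saturates once the door angle exceeds 0.5)."""
    assert goal == "open(door)"  # the only novelty this mock knows about
    reward_fn = lambda angle: 1.0 if angle > 0.5 else angle
    return "open(door)", reward_fn


def learn_policy(reward_fn, trials=200, seed=0):
    """RL stand-in: random search over a 1-D action (target door angle)."""
    rng = random.Random(seed)
    best_action, best_reward = 0.0, reward_fn(0.0)
    for _ in range(trials):
        action = rng.uniform(0.0, 1.0)
        reward = reward_fn(action)
        if reward > best_reward:
            best_action, best_reward = action, reward
    return best_action, best_reward


goal = "open(door)"
if plan(goal, DOMAIN) is None:            # 1. planner fails on the novelty
    op, reward_fn = llm_propose(goal)     # 2. LLM fills the gap
    DOMAIN[op] = goal                     # 3. extend the symbolic domain
    action, reward = learn_policy(reward_fn)  # 4. learn the new skill

print(plan(goal, DOMAIN))  # the previously unsolvable goal now has a plan
```

In the real system the discovered operator would carry preconditions and effects so the planner can chain it into longer plans, and the learned policy (rather than a single action) would ground the operator's execution.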