Instruction-Augmented Long-Horizon Planning: Embedding Grounding Mechanisms in Embodied Mobile Manipulation

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Embodied humanoid robots face significant challenges in long-horizon mobile manipulation planning in unstructured real-world environments—particularly in achieving environment-grounded, quantitative understanding of affordances and autonomous decision-making without reliance on hand-crafted textual prompts.

Method: This paper proposes an instruction-augmented embodied planning framework that integrates the abstract reasoning capabilities of large language models (LLMs) with multimodal (vision/tactile/pose) environmental grounding. It directly maps natural language instructions to PDDL planning problems constrained by quantitative affordance estimates, enabling closed-loop perception-decision-execution refinement.

Contribution/Results: The framework introduces the first fully automated "semantics → quantified feasibility → action sequence" translation paradigm, eliminating manual prompt engineering. Evaluated on realistic long-horizon tasks spanning seven distinct manipulation skills, it achieves an average success rate exceeding 80%, significantly improving robot autonomy and generalization in unstructured physical environments.
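The "instruction → PDDL problem constrained by affordance estimates" step can be illustrated with a minimal sketch. The object names, affordance threshold, and PDDL template below are hypothetical assumptions for illustration, not the paper's actual implementation:

```python
# Hypothetical sketch of the instruction-to-PDDL augmentation step.
# The threshold, object names, and problem template are assumptions,
# not the IALP system's actual interfaces.

AFFORDANCE_THRESHOLD = 0.5  # minimum score to treat an object as manipulable


def augment_to_pddl(goal_object, goal_location, affordances):
    """Build a PDDL problem whose init facts are constrained by
    quantitative affordance estimates from perception."""
    # Keep only objects the robot can feasibly manipulate right now.
    feasible = [o for o, s in affordances.items() if s >= AFFORDANCE_THRESHOLD]
    objects = " ".join(feasible + [goal_location])
    init = " ".join(f"(graspable {o})" for o in feasible)
    return (
        "(define (problem fetch-task)\n"
        "  (:domain mobile-manipulation)\n"
        f"  (:objects {objects})\n"
        f"  (:init {init})\n"
        f"  (:goal (at {goal_object} {goal_location})))"
    )


problem = augment_to_pddl(
    "cup", "table",
    affordances={"cup": 0.9, "heavy_box": 0.2},  # simulated perception scores
)
print(problem)
```

Note how the infeasible `heavy_box` (score 0.2) is excluded from the generated problem, so the downstream planner can only produce action sequences the robot can actually execute.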

📝 Abstract
Enabling humanoid robots to perform long-horizon mobile manipulation planning in real-world environments based on embodied perception and comprehension abilities has been a longstanding challenge. With the recent rise of large language models (LLMs), there has been a notable increase in the development of LLM-based planners. These approaches either utilize human-provided textual representations of the real world or heavily depend on prompt engineering to extract such representations, lacking the capability to quantitatively understand the environment, such as determining the feasibility of manipulating objects. To address these limitations, we present the Instruction-Augmented Long-Horizon Planning (IALP) system, a novel framework that employs LLMs to generate feasible and optimal actions based on real-time sensor feedback, including grounded knowledge of the environment, in a closed-loop interaction. Distinct from prior works, our approach augments user instructions into PDDL problems by leveraging both the abstract reasoning capabilities of LLMs and grounding mechanisms. By conducting various real-world long-horizon tasks, each consisting of seven distinct manipulatory skills, our results demonstrate that the IALP system can efficiently solve these tasks with an average success rate exceeding 80%. Our proposed method can operate as a high-level planner, equipping robots with substantial autonomy in unstructured environments through the utilization of multi-modal sensor inputs.
Problem

Research questions and friction points this paper is trying to address.

Enabling humanoid robots to perform long-horizon mobile manipulation tasks in unstructured real-world environments.
Addressing the inability of existing LLM-based planners to quantitatively understand the environment (e.g., the feasibility of manipulating objects) without hand-crafted prompts.
Developing a system for real-time, grounded, and optimal action planning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Closed-loop LLM-based planning driven by real-time sensor feedback.
Augmentation of user instructions into PDDL problems via grounding mechanisms.
Multi-modal sensor inputs enabling substantial robot autonomy.
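The closed-loop perception-decision-execution refinement described in the abstract can be sketched as a simple replanning loop. All callables and the round budget below are hypothetical stubs, not the paper's actual interfaces:

```python
# Illustrative closed-loop "sense -> plan -> execute" cycle; the sensor,
# planner, and skill-execution callables are placeholders standing in for
# the system's multi-modal perception and LLM-backed planner.

def closed_loop_plan(sense, plan, execute, max_rounds=3):
    """Replan from fresh sensor feedback until all actions succeed."""
    for _ in range(max_rounds):
        state = sense()                    # multi-modal sensor snapshot
        actions = plan(state)              # high-level planner -> action sequence
        if all(execute(a) for a in actions):
            return actions                 # every skill reported success
    return None                            # refinement budget exhausted


# Toy run with stubbed components.
executed = []
result = closed_loop_plan(
    sense=lambda: {"cup_visible": True},
    plan=lambda state: ["navigate", "grasp", "place"],
    execute=lambda action: executed.append(action) is None,  # stub: always succeeds
)
```

Because the loop re-senses before every planning round, a failed skill execution triggers replanning from the current (rather than stale) environment state.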