From Vague Instructions to Task Plans: A Feedback-Driven HRC Task Planning Framework based on LLMs

📅 2025-03-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
High-level, ambiguous natural language instructions in human-robot collaboration (HRC) hinder the generation of reliable task plans. Method: This paper proposes a dynamic planning framework integrating large language models (LLMs) with real-time human feedback. It introduces a novel “single-prompt–multi-task–multi-environment” generalization paradigm, enabling a concise, unified prompt to adapt across heterogeneous HRC scenarios. Crucially, it explicitly models user preferences within closed-loop planning via dynamic prompt engineering and structured task modeling to achieve intent-aligned adaptive plan refinement. Results: Experiments demonstrate significant improvements in plan feasibility and user satisfaction. The framework achieves cross-scenario generalization across diverse HRC tasks using only one short, fixed prompt—reducing manual prompt engineering effort by over 80%.

Technology Category

Application Category

📝 Abstract
Recent advances in large language models (LLMs) have demonstrated their potential as planners in human-robot collaboration (HRC) scenarios, offering a promising alternative to traditional planning methods. LLMs, which can generate structured plans by reasoning over natural language inputs, have the ability to generalize across diverse tasks and adapt to human instructions. This paper investigates the potential of LLMs to facilitate planning in the context of human-robot collaborative tasks, with a focus on their ability to reason from high-level, vague human inputs, and fine-tune plans based on real-time feedback. We propose a novel hybrid framework that combines LLMs with human feedback to create dynamic, context-aware task plans. Our work also highlights how a single, concise prompt can be used for a wide range of tasks and environments, overcoming the limitations of long, detailed structured prompts typically used in prior studies. By integrating user preferences into the planning loop, we ensure that the generated plans are not only effective but aligned with human intentions.
Problem

Research questions and friction points this paper is trying to address.

LLMs generate task plans from vague human instructions.
Dynamic task planning using real-time human feedback.
Single concise prompt for diverse tasks and environments.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate structured plans from natural language.
Hybrid framework combines LLMs with human feedback.
Single concise prompt adapts to diverse tasks.
🔎 Similar Papers
No similar papers found.