HELM: Human-Preferred Exploration with Language Models

📅 2025-03-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Autonomous robotic exploration in dynamic, unknown environments struggles to adaptively respond to diverse human preferences—such as region prioritization or efficiency–completeness trade-offs—without manual intervention. Method: This paper proposes the first end-to-end framework that deeply integrates large language models (LLMs) into the exploration closed loop. It synergistically combines LLM-based semantic understanding, natural language instruction parsing, real-time semantic mapping, and motion planning to enable zero-shot, retraining-free preference modulation—directly translating human natural language intent into adaptive exploration policies. Results: In multi-scenario experiments, the method achieves task success rates comparable to state-of-the-art traditional approaches while enabling millisecond-level preference switching. Its core contribution is the first generalizable, fine-tuning-free language-to-exploration-policy mapping mechanism, eliminating reliance on hand-crafted parameters or offline training—thereby significantly enhancing human–robot collaboration flexibility and practicality.

Technology Category

Application Category

📝 Abstract
In autonomous exploration tasks, robots are required to explore and map unknown environments while efficiently planning in dynamic and uncertain conditions. Given the significant variability of environments, human operators often have specific preference requirements for exploration, such as prioritizing certain areas or optimizing for different aspects of efficiency. However, existing methods struggle to accommodate these human preferences adaptively, often requiring extensive parameter tuning or network retraining. With the recent advancements in Large Language Models (LLMs), which have been widely applied to text-based planning and complex reasoning, their potential for enhancing autonomous exploration is becoming increasingly promising. Motivated by this, we propose an LLM-based human-preferred exploration framework that seamlessly integrates a mobile robot system with LLMs. By leveraging the reasoning and adaptability of LLMs, our approach enables intuitive and flexible preference control through natural language while maintaining a task success rate comparable to state-of-the-art traditional methods. Experimental results demonstrate that our framework effectively bridges the gap between human intent and policy preference in autonomous exploration, offering a more user-friendly and adaptable solution for real-world robotic applications.
Problem

Research questions and friction points this paper is trying to address.

Adapting robots to human exploration preferences in dynamic environments.
Integrating LLMs for intuitive natural language control in robotic exploration.
Bridging human intent and policy preference in autonomous robotic systems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based human-preferred exploration framework
Natural language control for robot preferences
Seamless integration of LLMs with mobile robots
🔎 Similar Papers
No similar papers found.
Shuhao Liao
Shuhao Liao
Beihang University
Multi-agent SystemsReinforcement LearningRobot learning
X
Xuxin Lv
Hangzhou International Innovation Institute, Beihang University, China
Yuhong Cao
Yuhong Cao
National University of Singapore
Robot learningPath Planing
J
Jeric Lew
Department of Mechanical Engineering, National University of Singapore, Singapore
W
Wenjun Wu
Hangzhou International Innovation Institute, Beihang University, China
Guillaume Sartoretti
Guillaume Sartoretti
Assistant Professor, National University of Singapore (NUS), Mechanical Engineering Dpt
Multi-Agent SystemsRoboticsSwarm IntelligenceDistributed ControlDistributed Learning