LGR2: Language Guided Reward Relabeling for Accelerating Hierarchical Reinforcement Learning

📅 2024-06-09
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Hierarchical reinforcement learning (HRL) for natural-language-instructed robotic control suffers from non-stationary high-level rewards: as the low-level policy evolves, the reward signal seen by the high-level policy shifts, severely hindering high-level policy learning. Method: The paper proposes a language-guided high-level reward relabeling mechanism that leverages large language models (LLMs) to semantically parse instructions and map sparse environmental feedback into instruction-aligned high-level reward signals that are decoupled from low-level policy behaviour, thereby mitigating reward non-stationarity in HRL. The approach integrates LLM-based semantic understanding, a hierarchical control architecture, and language-informed reward shaping. Contribution/Results: Evaluated on sparse-reward navigation and manipulation tasks, the method achieves success rates exceeding 70%, substantially outperforming baselines that typically fail to make significant progress. Real-robot experiments on complex tasks further demonstrate that LGR2 consistently outperforms the baselines.
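The relabeling idea in the summary above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the subgoal parser stands in for the LLM call, and the function names, waypoint values, and the sparse 0/-1 reward structure are all assumptions made for the sketch.

```python
import numpy as np

def parse_instruction_to_subgoals(instruction):
    """Hypothetical stand-in for the LLM step that parses a natural-language
    instruction into an ordered list of subgoal positions. LGR2 uses an LLM
    here; this sketch returns fixed 2-D waypoints for illustration."""
    return [np.array([0.5, 0.0]), np.array([1.0, 1.0])]

def language_guided_reward(subgoal, achieved_state, tol=0.1):
    """Sparse higher-level reward defined against the instruction-derived
    subgoal. Because it depends only on the achieved state and the subgoal,
    it does not shift as the lower-level policy changes."""
    return 0.0 if np.linalg.norm(subgoal - achieved_state) <= tol else -1.0

def relabel_buffer(transitions, instruction):
    """Relabel higher-level replay transitions with language-guided rewards.

    Each transition is (state, subgoal_index, achieved_state); the output
    attaches the instruction-derived subgoal and its decoupled reward."""
    subgoals = parse_instruction_to_subgoals(instruction)
    relabeled = []
    for state, subgoal_idx, achieved_state in transitions:
        subgoal = subgoals[subgoal_idx]
        reward = language_guided_reward(subgoal, achieved_state)
        relabeled.append((state, subgoal, reward, achieved_state))
    return relabeled
```

The key design point the sketch illustrates: the relabeled reward is a function of the instruction and the achieved state only, so replayed high-level experience stays consistent even while the low-level policy is still learning.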

📝 Abstract
Developing interactive systems that utilize natural language instructions to solve complex robotic control tasks has long been a goal of the robotics community. While Large Language Models (LLMs) excel at logical reasoning, in-context learning, and code generation, translating high-level instructions into low-level robotic actions remains challenging. Furthermore, solving such tasks often requires acquiring policies to execute diverse subtasks and integrating them to achieve the final objective. Hierarchical Reinforcement Learning (HRL) offers a promising solution by enabling temporal abstraction and improved exploration. However, HRL suffers from non-stationarity caused by the changing lower-level behaviour, which hinders effective policy learning. We propose LGR2, a novel HRL framework that mitigates non-stationarity by using language-guided higher-level rewards that remain unaffected by the changing lower-level policy behaviour. To analyze the efficacy of our approach, we perform empirical analysis to demonstrate that LGR2 effectively mitigates non-stationarity in HRL and attains success rates exceeding 70% in challenging, sparsely-rewarded robotic navigation and manipulation environments, where other baselines typically fail to show significant progress. Finally, we perform real-world robotic experiments on complex tasks and demonstrate that LGR2 consistently outperforms the baselines.
Problem

Research questions and friction points this paper is trying to address.

Translate high-level language instructions into low-level robotic actions.
Mitigate non-stationarity in Hierarchical Reinforcement Learning (HRL).
Improve policy learning in complex robotic navigation and manipulation tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language-guided higher-level rewards that are decoupled from the evolving lower-level policy
Mitigates non-stationarity in HRL policy learning
Achieves success rates exceeding 70% in sparse-reward robotic navigation and manipulation tasks