ExploRLLM: Guiding Exploration in Reinforcement Learning with Large Language Models

📅 2024-03-14
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
🤖 AI Summary
Reinforcement learning (RL) suffers from low sample efficiency and unstable convergence in robotic manipulation, while foundation models (FMs) lack sufficient physical and spatial reasoning capabilities. Method: We propose ExploRLLM, a paradigm integrating large language models (LLMs) with residual RL. The LLM generates zero-shot policy code and compact semantic representations, while a residual RL agent models physical dynamics and corrects LLM-induced modeling biases, enabling decoupled optimization of policy generation and physics-aware compensation. ExploRLLM introduces an LLM-guided exploration mechanism, facilitating zero-shot sim-to-real transfer. Results: On tabletop manipulation tasks, ExploRLLM significantly outperforms both FM-only policies and classical RL baselines. Real-world experiments validate its zero-shot cross-domain generalization from simulation to reality, substantially improving training efficiency and policy robustness.

📝 Abstract
In robot manipulation, Reinforcement Learning (RL) often suffers from low sample efficiency and uncertain convergence, especially in large observation and action spaces. Foundation Models (FMs) offer an alternative, demonstrating promise in zero-shot and few-shot settings. However, they can be unreliable due to limited physical and spatial understanding. We introduce ExploRLLM, a method that combines the strengths of both paradigms. In our approach, FMs improve RL convergence by generating policy code and efficient representations, while a residual RL agent compensates for the FMs' limited physical understanding. We show that ExploRLLM outperforms both policies derived from FMs and RL baselines in table-top manipulation tasks. Additionally, real-world experiments show that the policies exhibit promising zero-shot sim-to-real transfer. Supplementary material is available at https://explorllm.github.io.
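The abstract's core mechanism, a residual RL agent correcting an FM-generated base policy, can be sketched as a simple action composition. This is a minimal illustration, not the paper's actual interface: `llm_base_policy`, `residual_agent`, and the `scale` factor are hypothetical names and the composition rule is assumed to be additive.

```python
import numpy as np

def residual_policy_action(obs, llm_base_policy, residual_agent, scale=0.1):
    """Compose an action as the FM base action plus a learned residual.

    Hypothetical sketch: the LLM-generated policy supplies a coarse action,
    and a small learned correction compensates for the FM's limited
    physical understanding.
    """
    a_base = np.asarray(llm_base_policy(obs))  # action from FM-generated policy code
    a_res = np.asarray(residual_agent(obs))    # learned physics-aware correction
    return a_base + scale * a_res              # residual kept small relative to the base
```

Keeping the residual small lets the RL agent refine, rather than override, the FM's behavior, which is one way such hybrid schemes preserve the FM's zero-shot competence while training converges.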
Problem

Research questions and friction points this paper is trying to address.

Improves RL sample efficiency in robot manipulation
Addresses unreliable FM physical understanding
Enhances sim-to-real transfer in policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines FMs and RL for efficient policy generation
Uses residual RL to compensate for FM limitations
Enables zero-shot sim-to-real transfer in robotics
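The LLM-guided exploration idea above can be sketched as an epsilon-greedy rule biased toward FM suggestions. This is an assumed illustration of the general technique, not the paper's exact algorithm: `rl_action`, `llm_suggested_action`, and `p_llm` are hypothetical names.

```python
import random

def guided_explore_action(obs, rl_action, llm_suggested_action,
                          p_llm=0.5, epsilon=0.1, action_space=None):
    """Epsilon-greedy exploration biased toward an LLM-suggested action.

    Hypothetical sketch: with probability `epsilon` the agent explores;
    during exploration, with probability `p_llm` it follows the LLM's
    suggestion instead of a uniformly random action.
    """
    if random.random() < epsilon:
        if random.random() < p_llm:
            return llm_suggested_action(obs)  # LLM-guided exploration step
        return random.choice(action_space)    # plain random exploration
    return rl_action(obs)                     # exploit the current RL policy
```

Steering exploration with FM suggestions is one way to cut the sample complexity of random exploration in large observation and action spaces, which matches the efficiency gains the paper reports.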
Runyu Ma
Cognitive Robotics, Delft University of Technology, The Netherlands
Jelle Luijkx
Cognitive Robotics, Delft University of Technology, The Netherlands
Zlatan Ajanović
RWTH Aachen University (formerly TU Delft, TU Graz, UNSA)
AI and Robotics, Search, Exploration in RL, Task and Motion Planning, Optimal Control
Jens Kober
Associate Professor, CoR, TU Delft
Robotics, Machine Learning