🤖 AI Summary
Reinforcement learning (RL) suffers from low sample efficiency and unstable convergence in robotic manipulation, while foundation models (FMs) lack sufficient physical and spatial reasoning. Method: ExploRLLM integrates large language models (LLMs) with residual RL. The LLM generates zero-shot policy code and compact semantic representations, while a residual RL agent models physical dynamics and corrects the LLM's modeling biases, decoupling policy generation from physics-aware compensation. An LLM-guided exploration mechanism further accelerates training. Results: On tabletop manipulation tasks, ExploRLLM outperforms both FM-only policies and classical RL baselines, and real-world experiments show promising zero-shot sim-to-real transfer, improving training efficiency and policy robustness.
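The residual-RL idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all function names, the linear residual, the observation keys, and the scaling factor `alpha` are hypothetical, standing in for an LLM-generated base policy and a learned correction term.

```python
import numpy as np

def llm_base_policy(obs):
    """Hypothetical LLM-generated policy: move the gripper toward the target."""
    delta = obs["target_pos"] - obs["gripper_pos"]
    return np.clip(delta, -1.0, 1.0)

def residual_policy(obs, theta):
    """Hypothetical learned residual (a linear map here, purely for illustration)."""
    features = np.concatenate([obs["gripper_pos"], obs["target_pos"]])
    return theta @ features

def combined_action(obs, theta, alpha=0.1):
    """Residual RL: final action = base action + scaled learned correction."""
    return llm_base_policy(obs) + alpha * residual_policy(obs, theta)

obs = {"gripper_pos": np.array([0.0, 0.0, 0.2]),
       "target_pos": np.array([0.3, -0.1, 0.05])}
theta = np.zeros((3, 6))  # untrained residual: correction is zero
action = combined_action(obs, theta)  # equals the base action here
```

With an untrained (zero) residual the agent simply follows the LLM-generated policy; training then shapes `theta` so the correction term compensates where the base policy's physical model is wrong, which is the decoupling the summary describes.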
📝 Abstract
In robot manipulation, Reinforcement Learning (RL) often suffers from low sample efficiency and uncertain convergence, especially in large observation and action spaces. Foundation Models (FMs) offer an alternative, demonstrating promise in zero-shot and few-shot settings. However, they can be unreliable due to limited physical and spatial understanding. We introduce ExploRLLM, a method that combines the strengths of both paradigms. In our approach, FMs improve RL convergence by generating policy code and efficient representations, while a residual RL agent compensates for the FMs' limited physical understanding. We show that ExploRLLM outperforms both policies derived from FMs and RL baselines in table-top manipulation tasks. Additionally, real-world experiments show that the policies exhibit promising zero-shot sim-to-real transfer. Supplementary material is available at https://explorllm.github.io.