🤖 AI Summary
To address the exploration-exploitation imbalance and low sample efficiency in sparse-reward reinforcement learning, this paper proposes LMGT: a novel framework that leverages a large language model (LLaMA-3) as an interpretable, non-parametric reward shaper. LMGT dynamically refines reward signals using the LLM’s embedded prior knowledge—e.g., from Wikipedia tutorials—without modifying the environment or hand-crafting reward functions. Because it combines prompt engineering with reward adjustment rather than altering the learning algorithm itself, LMGT is algorithm-agnostic and compatible with mainstream RL methods such as PPO and SAC. We evaluate it on the Housekeep embodied robotics simulation and multiple standard RL benchmarks. Experimental results demonstrate that LMGT significantly improves sample efficiency, reducing required training samples by 42% and computational overhead by 35%. By transcending the limitations of traditional reward engineering and inverse reinforcement learning, LMGT establishes a new paradigm for knowledge-guided, efficient reinforcement learning.
📝 Abstract
The inherent uncertainty in the environmental transition model of Reinforcement Learning (RL) necessitates a delicate balance between exploration and exploitation. This balance is crucial for optimizing computational resources to accurately estimate expected rewards for the agent. In scenarios with sparse rewards, such as robotic control systems, achieving this balance is particularly challenging. However, given that many environments possess extensive prior knowledge, learning from the ground up in such contexts may be redundant. To address this issue, we propose Language Model Guided reward Tuning (LMGT), a novel, sample-efficient framework. LMGT leverages the comprehensive prior knowledge embedded in Large Language Models (LLMs) and their proficiency in processing non-standard data forms, such as wiki tutorials. By utilizing LLM-guided reward shifts, LMGT adeptly balances exploration and exploitation, thereby guiding the agent's exploratory behavior and enhancing sample efficiency. We have rigorously evaluated LMGT across various RL tasks, including the embodied robotic environment Housekeep. Our results demonstrate that LMGT consistently outperforms baseline methods. Furthermore, the findings suggest that our framework can substantially reduce the computational resources required during the RL training phase.
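The core idea of an LLM-guided reward shift can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`llm_reward_shift`, `shaped_reward`, `scale`) are hypothetical, and a toy heuristic stands in for the actual LLM call that would score a (state, action) pair against prior knowledge such as a wiki tutorial.

```python
def llm_reward_shift(state_desc: str, action_desc: str) -> float:
    """Stub standing in for an LLM scorer (hypothetical): return +1, 0,
    or -1 depending on whether the action looks promising given prior
    knowledge about the task."""
    # Toy heuristic in place of a real LLM judgment over text descriptions.
    if "toward goal" in action_desc:
        return 1.0
    if "away from goal" in action_desc:
        return -1.0
    return 0.0


def shaped_reward(env_reward: float, state_desc: str, action_desc: str,
                  scale: float = 0.1) -> float:
    """Add the LLM-derived shift to the (possibly sparse) environment
    reward; `scale` keeps the shift from overwhelming the true signal."""
    return env_reward + scale * llm_reward_shift(state_desc, action_desc)
```

In use, the agent's training loop would replace the raw environment reward with `shaped_reward(...)` before each update, leaving the environment and the RL algorithm (e.g., PPO or SAC) untouched.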