🤖 AI Summary
Existing reinforcement learning exploration strategies suffer from insufficient task adaptability and dynamic responsiveness: predefined stochastic processes (e.g., ε-greedy, Gaussian noise) fail to model task-specific characteristics, while their evolution mechanisms—typically limited to fixed variance decay—are rigid and incapable of adapting to the agent's real-time learning progress. To address this, the authors propose LLM-Explorer, a framework that leverages large language models to enhance RL exploration. LLM-Explorer uses prompt engineering to parse agent trajectories and autonomously generate, then periodically refine, task-specific, state-aware probabilistic exploration distributions. Designed as a plug-and-play module, it integrates with mainstream algorithms including the DQN series, DDPG, and TD3. Evaluated on the Atari and MuJoCo benchmarks, LLM-Explorer achieves performance improvements of up to 37.27% on average. The implementation is open-sourced for reproducibility.
📝 Abstract
Policy exploration is critical in reinforcement learning (RL), where existing approaches include ε-greedy, Gaussian noise, and similar preset stochastic processes. However, these approaches are applied indiscriminately across all kinds of RL tasks, without accounting for the task-specific features that influence policy exploration. Moreover, during RL training, the evolution of such stochastic processes is rigid—typically limited to a decay in the variance—and fails to adjust flexibly to the agent's real-time learning status. Inspired by the analysis and reasoning capabilities of large language models (LLMs), we design LLM-Explorer to adaptively generate task-specific exploration strategies with LLMs, enhancing policy exploration in RL. In our design, we sample the agent's learning trajectory during RL training on a given task and prompt the LLM to analyze the agent's current policy learning status and then generate a probability distribution for future policy exploration. By updating this probability distribution periodically, we derive a stochastic process specialized for the particular task and dynamically adjusted to the learning process. Our design is a plug-in module compatible with widely applied RL algorithms, including the DQN series, DDPG, TD3, and any variants developed from them. Through extensive experiments on the Atari and MuJoCo benchmarks, we demonstrate LLM-Explorer's capability to enhance RL policy exploration, achieving an average performance improvement of up to 37.27%. Our code is open-source at https://anonymous.4open.science/r/LLM-Explorer-19BE for reproducibility.
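The loop the abstract describes—periodically prompting an LLM with trajectory data, receiving an exploration probability distribution, and plugging that distribution into a standard value-based action-selection rule—can be sketched as follows. This is a minimal illustration, not the paper's implementation: `query_llm` is a hypothetical stand-in for a real LLM call (here it returns a uniform distribution), and the update period, ε value, and Q-values are made-up placeholders.

```python
import random

def query_llm(trajectory_summary: str, n_actions: int) -> list[float]:
    """Placeholder for prompting an LLM to analyze the agent's learning
    status and return a probability distribution over actions.
    Assumption: a real implementation would build a prompt from the
    sampled trajectory and parse the model's reply; here we just
    return a uniform distribution."""
    return [1.0 / n_actions] * n_actions

def select_action(q_values: list[float],
                  explore_dist: list[float],
                  epsilon: float = 0.1) -> int:
    """Plug-in exploration for a DQN-style agent: with probability
    epsilon, sample from the LLM-derived distribution instead of a
    preset stochastic process; otherwise act greedily on Q-values."""
    if random.random() < epsilon:
        return random.choices(range(len(q_values)), weights=explore_dist)[0]
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Training loop with periodic refinement of the exploration distribution
# (update period of 50 steps is an assumption for illustration).
n_actions = 4
explore_dist = query_llm("initial trajectory", n_actions)
for step in range(100):
    q_values = [0.0, 1.0, 0.5, 0.2]  # dummy Q-values from the agent
    action = select_action(q_values, explore_dist)
    if (step + 1) % 50 == 0:
        explore_dist = query_llm(f"trajectory up to step {step}", n_actions)
```

Because the exploration distribution is consumed only at action-selection time, the same hook applies to continuous-control algorithms like DDPG and TD3 by sampling perturbation noise from the LLM-derived distribution instead of a fixed Gaussian.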