LLM-Explorer: A Plug-in Reinforcement Learning Policy Exploration Enhancement Driven by Large Language Models

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing reinforcement learning exploration strategies suffer from insufficient task adaptability and dynamic responsiveness: predefined stochastic processes (e.g., ε-greedy, Gaussian noise) fail to model task-specific characteristics, while their evolution mechanisms—typically limited to fixed variance decay—are rigid and incapable of adapting to the agent's real-time learning progress. To address this, the authors propose LLM-Explorer, a framework that integrates large language models into RL exploration. LLM-Explorer uses prompt engineering to parse agent trajectories and autonomously generate, then periodically refine, task-specific, state-aware probabilistic exploration distributions. Designed as a plug-and-play module, it integrates with mainstream algorithms including the DQN series, DDPG, and TD3. Evaluated on the Atari and MuJoCo benchmarks, LLM-Explorer achieves an average performance improvement of up to 37.27%. The implementation is open-sourced for reproducibility.

📝 Abstract
Policy exploration is critical in reinforcement learning (RL), where existing approaches include greedy, Gaussian process, etc. However, these approaches utilize preset stochastic processes and are indiscriminately applied in all kinds of RL tasks without considering task-specific features that influence policy exploration. Moreover, during RL training, the evolution of such stochastic processes is rigid, which typically only incorporates a decay in the variance, failing to adjust flexibly according to the agent's real-time learning status. Inspired by the analyzing and reasoning capability of large language models (LLMs), we design LLM-Explorer to adaptively generate task-specific exploration strategies with LLMs, enhancing the policy exploration in RL. In our design, we sample the learning trajectory of the agent during the RL training in a given task and prompt the LLM to analyze the agent's current policy learning status and then generate a probability distribution for future policy exploration. Updating the probability distribution periodically, we derive a stochastic process specialized for the particular task and dynamically adjusted to adapt to the learning process. Our design is a plug-in module compatible with various widely applied RL algorithms, including the DQN series, DDPG, TD3, and any possible variants developed based on them. Through extensive experiments on the Atari and MuJoCo benchmarks, we demonstrate LLM-Explorer's capability to enhance RL policy exploration, achieving an average performance improvement up to 37.27%. Our code is open-source at https://anonymous.4open.science/r/LLM-Explorer-19BE for reproducibility.
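The abstract describes a periodic loop: sample the agent's recent learning trajectory, prompt an LLM to assess learning status, and receive a probability distribution that steers subsequent exploration. A minimal sketch of that plug-in loop is below; `llm_propose_distribution` is a hypothetical stand-in for the paper's prompting step (here it just returns a uniform distribution), and the environment and Q-update are toy stubs, not the authors' implementation.

```python
import random

def llm_propose_distribution(trajectory, n_actions):
    """Hypothetical stand-in for prompting the LLM with the sampled
    trajectory and parsing an exploration distribution from its reply.
    Placeholder behavior: uniform over actions."""
    return [1.0 / n_actions] * n_actions

def explore_action(q_values, explore_probs, epsilon, rng):
    """Plug-in exploration step: with probability epsilon, sample an action
    from the LLM-generated distribution instead of acting greedily."""
    if rng.random() < epsilon:
        return rng.choices(range(len(q_values)), weights=explore_probs)[0]
    return max(range(len(q_values)), key=lambda a: q_values[a])

def train_loop_sketch(n_steps=100, n_actions=4, update_every=25):
    rng = random.Random(0)
    trajectory = []
    probs = [1.0 / n_actions] * n_actions   # initial exploration distribution
    q_values = [0.0] * n_actions
    for step in range(n_steps):
        if step % update_every == 0 and trajectory:
            # Periodic refresh: derive a new task-specific distribution
            # from the agent's recent learning trajectory.
            probs = llm_propose_distribution(trajectory, n_actions)
        a = explore_action(q_values, probs, epsilon=0.1, rng=rng)
        reward = rng.random()                       # environment stub
        trajectory.append((a, reward))
        q_values[a] += 0.1 * (reward - q_values[a])  # toy Q-value update
    return q_values, probs
```

Because the exploration step only replaces the action-selection rule, the same wrapper slots into DQN-style value methods or the noise term of DDPG/TD3 without touching the underlying learner.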
Problem

Research questions and friction points this paper is trying to address.

Enhancing RL policy exploration with adaptive task-specific strategies
Overcoming rigid preset stochastic processes in RL exploration
Integrating LLMs to dynamically adjust exploration during RL training
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-Explorer enhances RL policy exploration adaptively
Generates task-specific exploration strategies using LLMs
Plug-in module compatible with various RL algorithms
Qianyue Hao
PhD Student, Department of Electronic Engineering, Tsinghua University
Reinforcement Learning · Large Language Models
Yiwen Song
Department of Electronic Engineering, BNRist, Tsinghua University
Qingmin Liao
Department of Electronic Engineering, BNRist, Tsinghua University
Jian Yuan
Department of Electronic Engineering, BNRist, Tsinghua University
Yong Li
Department of Electronic Engineering, BNRist, Tsinghua University