🤖 AI Summary
This work addresses inefficient exploration in high-dimensional continuous control tasks by using large language models (LLMs), for the first time, to guide exploration at the action level. By analyzing environmental states and visual replays of recent trajectories, the LLM generates intervention signals that steer the Soft Actor-Critic (SAC) policy toward directed, sample-efficient exploration. The proposed approach preserves SAC’s theoretical convergence guarantees while substantially improving sample efficiency and convergence speed. Experimental results on standard continuous control benchmarks such as MuJoCo demonstrate that the method consistently outperforms standard SAC as well as state-of-the-art exploration algorithms—including Random Network Distillation (RND), the Intrinsic Curiosity Module (ICM), and Exploration via Elliptical Episodic Bonuses (E3B)—in both final performance and sample efficiency.
📝 Abstract
We present GuidedSAC, a novel reinforcement learning (RL) algorithm that facilitates efficient exploration in vast state-action spaces. GuidedSAC leverages large language models (LLMs) as intelligent supervisors that provide action-level guidance for the Soft Actor-Critic (SAC) algorithm. The LLM-based supervisor analyzes the most recent trajectory using state information and visual replays, offering action-level interventions that enable targeted exploration. Furthermore, we provide a theoretical analysis of GuidedSAC, proving that it preserves the convergence guarantees of SAC while improving convergence speed. Through experiments in both discrete and continuous control environments, including toy text tasks and complex MuJoCo benchmarks, we demonstrate that GuidedSAC consistently outperforms standard SAC and state-of-the-art exploration-enhanced variants (e.g., RND, ICM, and E3B) in terms of sample efficiency and final performance.
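The intervention scheme described in the abstract — an LLM supervisor that periodically inspects the recent trajectory and overrides the SAC actor's sampled action — can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: `llm_suggest_action`, `sac_policy`, the 1-D point-mass dynamics, and the fixed intervention schedule are all assumptions made for the sketch.

```python
import random

def llm_suggest_action(trajectory):
    """Stand-in for the LLM supervisor (hypothetical): inspects the recent
    trajectory and returns a guiding action. Here, a toy heuristic that
    pushes a 1-D agent toward a goal at x = 1.0."""
    last_state = trajectory[-1]
    return 1.0 if last_state < 1.0 else -1.0

def sac_policy(state, rng):
    """Stand-in for the SAC actor: an undirected random action in [-1, 1]."""
    return rng.uniform(-1.0, 1.0)

def guided_rollout(steps=200, intervene_every=10, seed=0):
    """Roll out a 1-D point mass; every `intervene_every` steps the
    supervisor overrides the SAC action (action-level guidance)."""
    rng = random.Random(seed)
    state, trajectory = 0.0, [0.0]
    for t in range(steps):
        if t % intervene_every == 0:
            action = llm_suggest_action(trajectory)  # supervisor intervention
        else:
            action = sac_policy(state, rng)          # ordinary SAC sampling
        state += 0.05 * action                       # toy dynamics
        trajectory.append(state)
    return trajectory
```

In the actual algorithm the overridden transitions would still be written to the replay buffer and trained on with SAC's usual objective, which is why the convergence analysis carries over; the sketch only shows where the intervention enters the rollout loop.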