Goal-Guided Efficient Exploration via Large Language Model in Reinforcement Learning

📅 2025-09-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address low exploration efficiency and poor long-horizon planning in RL agents operating in complex, open-ended environments—as well as the high invocation cost and semantic mismatch inherent in existing LLM-augmented methods—this paper proposes a structured goal-guided reinforcement learning framework. Our method integrates two key components: (1) a structured goal planner that leverages an LLM to generate reusable, hierarchical goal functions in a single inference step and ranks them via LLM-assigned priority weights; and (2) a goal-conditioned action pruner that enables efficient exploration through dynamic forward goal guidance and goal-aware action masking. Because the goal function is generated once and reused, LLM invocation frequency is drastically reduced, mitigating semantic drift. Evaluated on the Crafter and Craftax-Classic benchmarks, our approach surpasses state-of-the-art methods, demonstrating synergistic improvements in both exploration efficiency and long-term decision-making performance.

📝 Abstract
Real-world decision-making tasks typically occur in complex and open environments, posing significant challenges to reinforcement learning (RL) agents' exploration efficiency and long-horizon planning capabilities. A promising approach is LLM-enhanced RL, which leverages the rich prior knowledge and strong planning capabilities of LLMs to guide RL agents in efficient exploration. However, existing methods mostly rely on frequent and costly LLM invocations and suffer from limited performance due to semantic mismatch. In this paper, we introduce a Structured Goal-guided Reinforcement Learning (SGRL) method that integrates a structured goal planner and a goal-conditioned action pruner to guide RL agents toward efficient exploration. Specifically, the structured goal planner utilizes LLMs to generate a reusable, structured function for goal generation, in which goals are prioritized. Furthermore, by utilizing LLMs to determine goals' priority weights, it dynamically generates forward-looking goals to guide the agent's policy toward more promising decision-making trajectories. The goal-conditioned action pruner employs an action masking mechanism that filters out actions misaligned with the current goal, thereby constraining the RL agent to select goal-consistent policies. We evaluate the proposed method on Crafter and Craftax-Classic, and experimental results demonstrate that SGRL achieves superior performance compared to existing state-of-the-art methods.
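The goal-conditioned action pruner described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it simply shows the standard mechanism of applying a boolean action mask to policy logits before the softmax, so that actions misaligned with the current goal receive (near-)zero probability. The action names, mask, and logit values are hypothetical.

```python
import numpy as np

def goal_masked_policy(logits, action_mask):
    """Apply a goal-conditioned action mask to raw policy logits.

    Actions flagged False in `action_mask` (i.e., misaligned with the
    current goal) get a large negative logit, so after the softmax
    their probability is effectively zero.
    """
    masked = np.where(action_mask, logits, -1e9)
    exp = np.exp(masked - masked.max())  # subtract max for numerical stability
    return exp / exp.sum()

# Hypothetical example: 5 actions, only actions 0 and 3 serve the goal.
logits = np.array([1.0, 2.0, 0.5, 1.5, -0.5])
mask = np.array([True, False, False, True, False])
probs = goal_masked_policy(logits, mask)
```

After masking, all probability mass is concentrated on the goal-consistent actions, which is what constrains the RL agent to goal-consistent behavior during exploration.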
Problem

Research questions and friction points this paper is trying to address.

Enhancing RL exploration efficiency in complex environments
Reducing costly LLM invocations through structured goal planning
Aligning agent actions with prioritized goals via pruning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured goal planner generates prioritized reusable goals
Goal-conditioned action pruner filters misaligned actions
LLM determines priority weights for forward-looking goals
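The innovation points above can be sketched in a few lines. The snippet below is an illustrative assumption, not the paper's code: it stands in for the reusable goal function by selecting the highest-weight goal that has not yet been achieved, where the weights play the role of the LLM-assigned priority weights. The goal names and weight values are invented for the example.

```python
def select_goal(goals, weights, achieved):
    """Pick the highest-priority goal not yet achieved.

    `weights` stand in for LLM-assigned priority weights; in SGRL these
    would come from a structured goal function the LLM generates once
    and that is reused across episodes.
    """
    candidates = [(w, g) for g, w in zip(goals, weights) if g not in achieved]
    if not candidates:
        return None  # all goals achieved
    return max(candidates)[0:2][1]  # goal with the largest weight

# Hypothetical Crafter-style goal hierarchy.
goals = ["collect_wood", "make_table", "make_pickaxe"]
weights = [0.9, 0.6, 0.3]
goal = select_goal(goals, weights, achieved={"collect_wood"})
```

Generating the prioritization once and re-querying only this cheap local function is what lets the method avoid the frequent LLM invocations the Problem section highlights.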
Yajie Qi
School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi, China
Wei Wei
School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi, China
Lin Li
School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi, China
Lijun Zhang
School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi, China
Zhidong Gao
School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi, China
Da Wang
Huizhong Song
School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi, China