🤖 AI Summary
To address low exploration efficiency and poor long-horizon planning in RL agents operating in complex, open-ended environments, as well as the high invocation cost and semantic mismatch of existing LLM-augmented methods, this paper proposes a Structured Goal-guided Reinforcement Learning (SGRL) framework. The method integrates two key components: (1) a structured goal planner that leverages an LLM to generate reusable, hierarchical goal functions in a single inference step and ranks goals via priority weights, dynamically producing forward-looking goals; and (2) a goal-conditioned action pruner that enables efficient exploration through goal-aware action masking. Because goal functions are generated once and reused, LLM invocation frequency drops drastically, mitigating semantic drift. Evaluated on the Crafter and Craftax-Classic benchmarks, the approach surpasses state-of-the-art methods, demonstrating synergistic improvements in both exploration efficiency and long-horizon decision-making.
📝 Abstract
Real-world decision-making tasks typically occur in complex and open environments, posing significant challenges to the exploration efficiency and long-horizon planning capabilities of reinforcement learning (RL) agents. A promising approach is LLM-enhanced RL, which leverages the rich prior knowledge and strong planning capabilities of LLMs to guide RL agents toward efficient exploration. However, existing methods mostly rely on frequent and costly LLM invocations and suffer from limited performance due to semantic mismatch. In this paper, we introduce a Structured Goal-guided Reinforcement Learning (SGRL) method that integrates a structured goal planner and a goal-conditioned action pruner to guide RL agents toward efficient exploration. Specifically, the structured goal planner uses an LLM to generate a reusable, structured goal-generation function in which goals are prioritized. By further using the LLM to determine goals' priority weights, it dynamically generates forward-looking goals that steer the agent's policy toward more promising decision-making trajectories. The goal-conditioned action pruner employs an action masking mechanism that filters out actions misaligned with the current goal, thereby constraining the RL agent to goal-consistent policies. We evaluate the proposed method on Crafter and Craftax-Classic, and experimental results demonstrate that SGRL achieves superior performance compared to existing state-of-the-art methods.
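The goal-conditioned action pruner described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the action names, the `GOAL_RELEVANT` mapping, and the function names are all hypothetical stand-ins for the learned goal functions and the Crafter action space.

```python
import numpy as np

# Hypothetical discrete action set and goal-to-action mapping
# (illustrative only; the paper's goal functions are LLM-generated).
ACTIONS = ["collect_wood", "place_table", "make_pickaxe", "attack", "sleep"]
GOAL_RELEVANT = {
    "gather_resources": {"collect_wood"},
    "craft_tools": {"place_table", "make_pickaxe"},
}

def goal_action_mask(goal: str) -> np.ndarray:
    """Boolean mask keeping only actions consistent with the current goal."""
    allowed = GOAL_RELEVANT.get(goal, set(ACTIONS))  # unknown goal: no pruning
    return np.array([a in allowed for a in ACTIONS])

def masked_policy(logits: np.ndarray, goal: str) -> np.ndarray:
    """Goal-aware masking: set pruned actions' logits to -inf, then softmax."""
    masked = np.where(goal_action_mask(goal), logits, -np.inf)
    exp = np.exp(masked - masked.max())
    return exp / exp.sum()

# With uniform logits and the goal "craft_tools", probability mass is
# redistributed over the two goal-consistent actions only.
probs = masked_policy(np.zeros(len(ACTIONS)), "craft_tools")
```

Masking at the logit level (rather than post-hoc filtering of sampled actions) keeps the policy a valid probability distribution over the pruned action set, so standard policy-gradient updates apply unchanged.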