🤖 AI Summary
In reinforcement learning, the difficulty of quantifying causal relationships impedes efficient exploration. Method: This paper proposes the "Goal Discovery with Causal Capacity" (GDCC) framework, which formally defines and computes the *causal capacity* of states to quantify the causal influence of an agent's actions on state transitions. Leveraging this metric, the framework automatically identifies high-causal-capacity states as semantically coherent subgoals, enabling goal-directed, targeted exploration. The approach integrates causal inference with Monte Carlo estimation, supports both discrete and high-dimensional continuous state spaces, and interfaces seamlessly with mainstream RL algorithms. Contribution/Results: Evaluation on multi-objective benchmarks demonstrates that the discovered subgoals align well with human-defined priors, and that the method achieves significantly higher task success rates than strong baselines.
📄 Abstract
Causal inference is crucial to how humans explore the world, and it can be modeled to enable an agent to explore its environment efficiently in reinforcement learning. Existing research indicates that establishing the causality between actions and state transitions enables an agent to reason about how a policy affects its future trajectory, thereby promoting directed exploration. However, measuring this causality is challenging because it is intractable in the vast state-action spaces of complex scenarios. In this paper, we propose a novel Goal Discovery with Causal Capacity (GDCC) framework for efficient environment exploration. Specifically, we first derive a measurement of causality in state space, *i.e.,* causal capacity, which represents the highest influence of an agent's behavior on future trajectories. We then present a Monte Carlo based method to identify critical points in discrete state spaces and further optimize this method for continuous, high-dimensional environments. These critical points reveal where the agent makes important decisions in the environment and are then regarded as subgoals that guide the agent to explore more purposefully and efficiently. Empirical results on multi-objective tasks demonstrate that states with high causal capacity align with our expected subgoals, and that GDCC achieves significant success rate improvements over baselines.
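The abstract does not give the formal definition or pseudocode, but the core idea — Monte Carlo estimation of how strongly actions influence future states, maximized over behavior — can be illustrated with a toy sketch. Everything below is an assumption for illustration: the environment dynamics, the function names, and the use of a KL-divergence-based proxy (action-conditioned vs. action-marginal next-state distribution) in place of the paper's actual causal capacity measure.

```python
import random
from collections import Counter
from math import log

# Hypothetical toy dynamics, purely illustrative: in the "corridor" both
# actions lead to the same place (actions have no causal influence); at
# the "junction" the action largely determines the next state.
def step(state, action, rng):
    if state == "corridor":
        return "junction"
    if rng.random() < 0.9:
        return "room_left" if action == 0 else "room_right"
    return "room_left" if action == 1 else "room_right"

def causal_capacity(state, actions, n_samples=5000, seed=0):
    """Monte Carlo proxy for causal capacity (an assumption, not the
    paper's definition): the KL divergence between the action-conditioned
    next-state distribution and the action-marginal one, maximized over
    actions. High values mark states where the chosen action strongly
    shapes the future trajectory."""
    rng = random.Random(seed)
    # Empirical next-state counts for each action.
    per_action = {a: Counter(step(state, a, rng) for _ in range(n_samples))
                  for a in actions}
    # Marginal next-state counts, pooling all actions.
    marginal = Counter()
    for counts in per_action.values():
        marginal.update(counts)
    total = sum(marginal.values())
    best = 0.0
    for a, counts in per_action.items():
        kl = sum((n / n_samples) * log((n / n_samples) / (marginal[s] / total))
                 for s, n in counts.items())
        best = max(best, kl)
    return best

# The junction, where the action matters, scores higher than the corridor,
# so it would be selected as a candidate subgoal.
print(causal_capacity("junction", [0, 1]) > causal_capacity("corridor", [0, 1]))
```

Under this proxy, the corridor scores exactly zero (both actions induce identical next-state distributions), while the junction scores strictly positive, which is the property GDCC exploits to pick decision-critical states as subgoals.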