🤖 AI Summary
Existing explainable reinforcement learning (XRL) methods predominantly provide step-wise local explanations and lack mechanisms to credibly assess agents' long-term behavioral trajectories. To address this, we propose a trajectory-level interpretability framework. Our method introduces a novel state importance metric that jointly incorporates Q-value differences and goal-directedness, enabling trajectory ranking and counterfactual rollout reasoning to answer "Why this trajectory?". The approach comprises three stages: importance modeling, trajectory aggregation analysis, and counterfactual generation. Experiments on OpenAI Gym benchmarks demonstrate that our framework more accurately identifies optimal trajectories; moreover, the selected trajectories exhibit significantly superior performance and robustness compared to alternatives. By grounding explanations in verifiable, goal-aware trajectory semantics, our method provides interpretable and trustworthy support for long-horizon RL decision-making.
📄 Abstract
As Reinforcement Learning (RL) agents are increasingly deployed in real-world applications, ensuring their behavior is transparent and trustworthy is paramount. A key component of trust is explainability, yet much of the work in Explainable RL (XRL) focuses on local, single-step decisions. This paper addresses the critical need for explaining an agent's long-term behavior through trajectory-level analysis. We introduce a novel framework that ranks entire trajectories by defining and aggregating a new state-importance metric. This metric combines the classic Q-value difference with a "radical term" that captures the agent's affinity to reach its goal, providing a more nuanced measure of state criticality. We demonstrate that our method successfully identifies optimal trajectories from a heterogeneous collection of agent experiences. Furthermore, by generating counterfactual rollouts from critical states within these trajectories, we show that the agent's chosen path is robustly superior to alternatives, thereby providing a powerful "Why this, and not that?" explanation. Our experiments in standard OpenAI Gym environments validate that our proposed importance metric is more effective at identifying optimal behaviors compared to classic approaches, offering a significant step towards trustworthy autonomous systems.
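The abstract describes scoring each state by a Q-value difference combined with a goal-affinity ("radical") term, then aggregating these scores to rank whole trajectories. The paper does not give the exact formula here, so the following is only a minimal sketch of that idea: the Q-value spread (best action vs. average action), the goal-affinity term, the blending weight `alpha`, and the sum-based aggregation are all illustrative assumptions, not the authors' actual definitions.

```python
import numpy as np

def state_importance(q_values, goal_affinity, alpha=0.5):
    """Hypothetical state-importance score (not the paper's exact metric).

    Blends the classic Q-value difference (how much the best action
    stands out from the average action) with a goal-affinity term
    capturing how strongly the state leads toward the goal.
    """
    q_diff = np.max(q_values) - np.mean(q_values)
    return q_diff + alpha * goal_affinity

def rank_trajectories(trajectories):
    """Rank trajectories by summed per-state importance (highest first).

    Each trajectory is a list of (q_values, goal_affinity) pairs;
    summation is one simple choice of aggregation.
    """
    scores = [sum(state_importance(q, g) for q, g in traj)
              for traj in trajectories]
    order = np.argsort(scores)[::-1]
    return [trajectories[i] for i in order], [scores[i] for i in order]
```

Under this sketch, a trajectory whose states have a sharply peaked Q-function and high goal affinity ranks above one where all actions look equally valuable; the top-ranked trajectory's critical states would then seed the counterfactual rollouts.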