AI Summary
This work addresses the challenge that reinforcement learning agents often exhibit unintended behaviors due to imperfect reward specifications, and that existing explainable reinforcement learning (XRL) methods struggle to capture cross-episode behavioral patterns. To bridge this gap, the paper proposes Behavior-Explainable Reinforcement Learning (BXRL), a novel framework that formalizes "behavior" as a first-class object. BXRL introduces behavior measure functions to quantify user-specified action patterns and defines contrastive behaviors to probe, via differentiation, the underlying causes of observed behaviors. The authors reimplement the HighwayEnv environment in JAX to support behavior definition, measurement, and gradient propagation, and analyze three existing XRL methods, proposing how each could be adapted to behavior-level interpretation. This framework provides both a theoretical foundation and an experimental platform for understanding agents' long-term strategic decision-making.
Abstract
A major challenge of Reinforcement Learning is that agents often learn undesired behaviors that seem to defy the reward structure they were given. Explainable Reinforcement Learning (XRL) methods can answer queries such as "explain this specific action", "explain this specific trajectory", and "explain the entire policy". However, XRL lacks a formal definition for behavior as a pattern of actions across many episodes. We provide such a definition, and use it to enable a new query: "Explain this behavior".
We present Behavior-Explainable Reinforcement Learning (BXRL), a new problem formulation that treats behaviors as first-class objects. BXRL defines a behavior measure as any function $m : \mathcal{T} \to \mathbb{R}$ over trajectories, allowing users to precisely express the pattern of actions that they find interesting and measure how strongly the policy exhibits it. We define contrastive behaviors that reduce the question "why does the agent prefer $a$ to $a'$?" to "why is $m(\tau)$ high?", which can be explored with differentiation. We do not implement an explainability method; we instead analyze three existing methods and propose how they could be adapted to explain behavior. We present a port of the HighwayEnv driving environment to JAX, which provides an interface for defining, measuring, and differentiating behaviors with respect to the model parameters.
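To make the formulation concrete, the following is a minimal sketch of a behavior measure $m(\tau)$ differentiated with respect to policy parameters, in the spirit of the JAX interface described above. All names (`policy`, `rollout`, `behavior_measure`) and the toy linear dynamics are illustrative assumptions, not the paper's actual API or the real HighwayEnv port.

```python
import jax
import jax.numpy as jnp

# Hypothetical sketch: a tiny differentiable rollout whose actions form a
# trajectory tau, plus a user-specified behavior measure m(tau).

def policy(params, obs):
    # Toy linear policy producing a scalar action (e.g. "acceleration").
    return jnp.tanh(obs @ params)

def rollout(params, obs0, steps=10):
    # Differentiable rollout: returns the sequence of actions (tau).
    def step(obs, _):
        a = policy(params, obs)
        next_obs = obs + 0.1 * a  # stand-in dynamics, not HighwayEnv's
        return next_obs, a
    _, actions = jax.lax.scan(step, obs0, None, length=steps)
    return actions

def behavior_measure(params, obs0):
    # m : T -> R, here "mean acceleration over the trajectory", i.e. how
    # strongly the policy exhibits an aggressive-acceleration pattern.
    tau = rollout(params, obs0)
    return jnp.mean(tau)

params = jnp.full(4, 0.5)
obs0 = jnp.ones(4)
m_val = behavior_measure(params, obs0)
# Because the rollout and measure are differentiable, we can ask which
# parameter directions increase m(tau):
grads = jax.grad(behavior_measure)(params, obs0)
print(m_val, grads)
```

Because `behavior_measure` composes only JAX primitives, `jax.grad` gives the sensitivity of the behavior to the model parameters directly, which is the mechanism the contrastive "why is $m(\tau)$ high?" query relies on.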