🤖 AI Summary
Unpredictable behavior of reinforcement learning (RL) agents in human-robot collaboration poses significant safety risks. Method: This paper proposes Predictability-Aware RL (PARL), a novel framework that models the trajectory entropy rate as an optimizable average-reward term and, for the first time, integrates it into the policy-gradient framework. PARL introduces a learnable, model-based entropy-rate value function that enables a controllable trade-off between predictability and task performance. The method combines model-based entropy-rate estimation, entropy-rate value-function modeling, an extension of the policy gradient, and joint optimization of the discounted task reward and the negative entropy rate. Results: Evaluated on human-robot interaction tasks, PARL yields near-optimal policies with markedly higher predictability; third-party behavioral-prediction accuracy improves significantly, demonstrating simultaneous gains in safety and task performance.
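The summary's central claim, that the trajectory entropy rate can be treated as an average reward, can be sketched as follows (the notation here is ours, not taken verbatim from the paper):

```latex
H(\pi) \;=\; \lim_{T \to \infty} \frac{1}{T}\,
\mathbb{E}_{\pi}\!\left[-\sum_{t=0}^{T-1} \log P^{\pi}(s_{t+1}\mid s_t)\right]
\;=\; \sum_{s} \mu_{\pi}(s)\, h_{\pi}(s),
\qquad
h_{\pi}(s) \;=\; -\sum_{s'} P^{\pi}(s'\mid s)\,\log P^{\pi}(s'\mid s)
```

where \(P^{\pi}\) is the policy-induced state transition kernel and \(\mu_{\pi}\) its stationary distribution. The local entropy \(h_{\pi}(s)\) plays the role of a per-step "reward" whose long-run average equals the entropy rate, so a policy-gradient method can jointly optimize the discounted task reward and \(-H(\pi)\).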
📄 Abstract
In Reinforcement Learning (RL), agents have no incentive to exhibit predictable behavior and are often pushed (e.g., through policy-entropy regularization) to randomize their actions in favor of exploration. This often makes it challenging for other agents and humans to predict an agent's behavior, triggering unsafe scenarios (e.g., in human-robot interaction). We propose a novel method to induce predictable behavior in RL agents, termed Predictability-Aware RL (PARL), which employs the agent's trajectory entropy rate to quantify predictability. Our method maximizes a linear combination of a standard discounted reward and the negative entropy rate, thus trading off optimality against predictability. We show how the entropy rate can be formally cast as an average reward, how entropy-rate value functions can be estimated from a learned model and incorporated into policy-gradient algorithms, and how this approach produces predictable, near-optimal policies in tasks inspired by human-robot use-cases.
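To make the "linear combination of discounted reward and negative entropy rate" concrete, here is a minimal numerical sketch. It is not the paper's implementation: the function names (`local_entropy`, `entropy_rate`, `parl_reward`) and the trade-off weight `beta` are our own illustrative choices, and the per-step surprisal term stands in for the model-based entropy-rate estimate described in the abstract.

```python
import numpy as np

def local_entropy(p_row):
    """Shannon entropy of one next-state distribution P(. | s)."""
    p = p_row[p_row > 0]  # drop zero-probability entries (0 * log 0 = 0)
    return -np.sum(p * np.log(p))

def entropy_rate(P, mu):
    """Entropy rate of a Markov chain: local entropies weighted by the
    stationary distribution mu of the policy-induced transition matrix P."""
    return sum(mu[s] * local_entropy(P[s]) for s in range(len(mu)))

def parl_reward(task_reward, logp_next, beta):
    """Predictability-augmented per-step reward: the task reward plus
    beta times the log-likelihood of the observed transition, whose
    long-run average is beta times the negative entropy rate."""
    return task_reward + beta * logp_next

# A deterministic chain is perfectly predictable (entropy rate 0);
# a uniform chain over 2 states has entropy rate log 2.
mu = np.array([0.5, 0.5])
print(entropy_rate(np.eye(2), mu))            # deterministic chain
print(entropy_rate(np.full((2, 2), 0.5), mu)) # uniform chain
```

In a policy-gradient loop, `parl_reward` would replace the raw task reward, with `logp_next` supplied by a learned transition model; larger `beta` trades task optimality for predictability, matching the controllable trade-off described above.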