🤖 AI Summary
Real-time inference of opponents’ goals and policies in non-cooperative multi-agent settings remains challenging, as existing deep inverse reinforcement learning (IRL) methods are predominantly offline, require large-scale datasets, and rely on first-order gradient optimization—rendering them unsuitable for time-critical adversarial interactions.
Method: We propose the first recursive online IRL framework grounded in the maximum-entropy principle. It minimizes an upper bound on the Guided Cost Learning objective with sequential second-order Newton updates and borrows ideas from the extended Kalman filter to enable rapid, robust parameter estimation.
Contribution/Results: Our method enables online cost function refinement from a single-step observation, drastically improving convergence speed and adaptability. Evaluated on standard and adversarial benchmark tasks, it achieves superior cost function recovery accuracy and policy generalization compared to state-of-the-art IRL algorithms.
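The paper itself is not reproduced here, so the exact update equations are unknown. As a rough illustration of the idea the summary describes, the sketch below performs EKF-style recursive second-order updates of a linear cost function from single-step expert observations under a max-entropy (softmax) policy. All names, the toy feature matrix, and the specific recursion are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, not from the paper): linear cost c(a) = theta @ phi[a]
# over discrete actions; a max-ent expert acts with p(a) ∝ exp(-c(a)).
n_actions, dim = 5, 3
phi = rng.normal(size=(n_actions, dim))   # per-action feature vectors
theta_star = np.array([1.0, -0.5, 0.3])   # ground-truth cost parameters

def policy(theta):
    logits = -phi @ theta
    p = np.exp(logits - logits.max())
    return p / p.sum()

def grad_hess(theta, a_obs):
    """Gradient and Gauss-Newton Hessian of the per-step negative
    log-likelihood -log p(a_obs | theta) under the softmax policy."""
    p = policy(theta)
    mean_phi = p @ phi
    g = phi[a_obs] - mean_phi                 # score of the observed action
    centered = phi - mean_phi
    H = centered.T @ (centered * p[:, None])  # feature covariance (PSD)
    return g, H

# EKF-like recursion: P plays the role of an inverse-curvature (covariance)
# estimate, so each single observation yields a Newton-like step.
theta = np.zeros(dim)
P = np.eye(dim)
p_star = policy(theta_star)
for _ in range(2000):
    a = rng.choice(n_actions, p=p_star)        # one expert action
    g, H = grad_hess(theta, a)
    P = np.linalg.inv(np.linalg.inv(P) + H)    # accumulate curvature
    theta = theta - P @ g                      # second-order update
```

Because curvature accumulates in `P`, step sizes shrink automatically, which is what gives recursive second-order schemes their fast, stable convergence from streaming single-step data.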
📝 Abstract
Inferring an adversary's goals from exhibited behavior is crucial for counterplanning and non-cooperative multi-agent systems in domains like cybersecurity, military operations, and strategy games. Deep Inverse Reinforcement Learning (IRL) methods based on maximum entropy principles show promise in recovering adversaries' goals but are typically offline, require large batches for gradient descent, and rely on first-order updates, limiting their applicability in real-time scenarios. We propose an online Recursive Deep Inverse Reinforcement Learning (RDIRL) approach to recover the cost function governing the adversary's actions and goals. Specifically, we minimize an upper bound on the standard Guided Cost Learning (GCL) objective using sequential second-order Newton updates, akin to the Extended Kalman Filter (EKF), yielding a fast-converging learning algorithm. We demonstrate that RDIRL recovers the cost and reward functions of expert agents in standard and adversarial benchmark tasks, where it outperforms several leading IRL algorithms.