🤖 AI Summary
In partially observable pursuit–evasion games on graphs, pursuers possess only incomplete information about the evader’s location, while the evader can anticipate and adapt to the pursuer’s strategy—posing significant challenges for computing real-time, worst-case robust policies.
Method: We propose the first real-time robust pursuit framework: (i) proving that a dynamic programming (DP) solver for Markov pursuit–evasion games remains optimal under the evader's asynchronous moves, and extending it with belief-state modeling over the evader's possible positions; and (ii) integrating graph neural networks with reinforcement learning to enable zero-shot cross-graph policy generalization.
Contribution/Results: Our approach learns robust policies that directly transfer to unseen real-world graph topologies without fine-tuning, consistently outperforming baseline methods trained from scratch on target graphs. Experiments demonstrate strong generalization and robustness under partial observability and adversarial evader behavior, establishing a new state of the art in real-time robust pursuit.
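The belief-preservation idea in the method above can be illustrated with a minimal sketch (the function name and adjacency-dict representation are my own, not from the paper): the pursuers track the set of nodes the evader could occupy, propagate that set one step along the graph's edges, and prune any node their current observations rule out.

```python
def update_belief(adj, belief, observed_clear):
    """One hypothetical belief-preservation step: expand the set of
    possible evader positions along graph edges, then discard nodes
    the pursuers directly observe to be empty."""
    propagated = set()
    for node in belief:
        propagated.add(node)            # the evader may stay put
        propagated.update(adj[node])    # or move to any neighbor
    return propagated - set(observed_clear)

# Usage: path graph 0-1-2-3, evader believed at node 1,
# a pursuer observes node 2 to be empty.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(sorted(update_belief(adj, {1}, {2})))  # [0, 1]
```

The update is monotone in the belief set, so it never discards a position the evader could actually occupy, which is the property a worst-case robust pursuer needs.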
📝 Abstract
Computing worst-case robust strategies in pursuit-evasion games (PEGs) is time-consuming, especially when real-world factors like partial observability are considered. Although such strategies matter for general security applications, real-time applicable pursuit strategies for graph-based PEGs are currently missing when the pursuers have only imperfect information about the evader's position. State-of-the-art reinforcement learning (RL) methods like Equilibrium Policy Generalization (EPG) and Grasper provide guidelines for learning graph neural network (GNN) policies robust to different game dynamics, but they are restricted to the perfect-information setting and do not account for an evader that can predict the pursuers' actions. This paper introduces the first approach to worst-case robust real-time pursuit strategies (R2PS) under partial observability. We first prove that a traditional dynamic programming (DP) algorithm for solving Markov PEGs maintains optimality under asynchronous moves by the evader. We then propose a belief preservation mechanism over the evader's possible positions, extending the DP pursuit strategies to a partially observable setting. Finally, we embed belief preservation into the state-of-the-art EPG framework to complete our R2PS learning scheme, which yields a real-time pursuer policy through cross-graph reinforcement learning against the asynchronous-move DP evasion strategies. After training, our policy achieves robust zero-shot generalization to unseen real-world graph structures and consistently outperforms a policy trained directly on the test graphs with the existing game RL approach.
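The asynchronous-move DP the abstract refers to can be sketched, under simplifying assumptions, as value iteration on a one-pursuer, one-evader game: the pursuer moves first, the evader observes that move and responds, and the value of a state is the worst-case number of pursuer moves until capture. This is a simplified illustration, not the paper's exact algorithm.

```python
def pursuit_values(adj, max_iters=100):
    """Worst-case capture-time values for a one-pursuer graph pursuit
    game via value iteration (simplified sketch, not the paper's DP).
    State (p, e): pursuer at p, evader at e. The pursuer moves first;
    the evader sees that move and replies (asynchronous moves).
    Evader-win states keep the value inf."""
    nodes = list(adj)
    INF = float("inf")
    V = {(p, e): (0.0 if p == e else INF) for p in nodes for e in nodes}
    for _ in range(max_iters):
        changed = False
        for p in nodes:
            for e in nodes:
                if p == e:
                    continue
                best = INF
                for p2 in list(adj[p]) + [p]:       # pursuer minimizes
                    if p2 == e:
                        cost = 1.0                  # immediate capture
                    else:                           # evader maximizes its reply
                        cost = 1.0 + max(V[(p2, e2)]
                                         for e2 in list(adj[e]) + [e])
                    best = min(best, cost)
                if best != V[(p, e)]:
                    V[(p, e)] = best
                    changed = True
        if not changed:
            break
    return V

# Usage: a path graph 0-1-2-3 is pursuer-win; starting at opposite ends,
# the pursuer corners an optimally playing evader in 3 moves.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
V = pursuit_values(adj)
print(V[(0, 3)])  # 3.0
```

Values initialized to infinity only decrease, so the iteration converges to the game value; states where the evader escapes forever stay infinite, which is exactly the worst-case robustness criterion the pursuer policy is trained against.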