🤖 AI Summary
This work addresses the multi-UAV cooperative pursuit-evasion problem in unknown 3D environments under dynamics constraints, partial observability, and real-world deployment challenges. We propose an end-to-end deep multi-agent reinforcement learning framework featuring: (1) a novel control policy parameterized jointly by collective thrust and body-frame angular rates; (2) a prediction-augmented neural network architecture that mitigates observation gaps; (3) an adaptive environment generator that enhances cross-scenario generalization; and (4) a two-stage reward refinement mechanism that improves cooperative learning. In simulation, our method achieves a 100% capture rate on unseen scenarios. Most notably, it demonstrates zero-shot transfer: the policy is deployed on a physical quadrotor swarm without fine-tuning, and it outperforms all baseline methods in tracking accuracy, robustness, and convergence efficiency.
📝 Abstract
Multi-UAV pursuit-evasion, where pursuers aim to capture evaders, poses a key challenge for UAV swarm intelligence. Multi-agent reinforcement learning (MARL) has demonstrated potential in modeling cooperative behaviors, but most RL-based approaches remain constrained to simplified simulations with limited dynamics or fixed scenarios. Previous attempts to deploy RL policies in real-world pursuit-evasion have largely been restricted to two-dimensional scenarios, such as ground vehicles or UAVs flying at fixed altitudes. In this paper, we address multi-UAV pursuit-evasion while accounting for UAV dynamics and physical constraints. We introduce an evader-prediction-enhanced network to tackle partial observability in cooperative strategy learning. Additionally, we propose an adaptive environment generator within MARL training, enabling higher exploration efficiency and better policy generalization across diverse scenarios. Simulations show our method significantly outperforms all baselines in challenging scenarios, generalizing to unseen scenarios with a 100% capture rate. Finally, we derive a feasible policy via a two-stage reward refinement and deploy the policy on real quadrotors in a zero-shot manner. To our knowledge, this is the first work to derive and deploy an RL-based policy using collective thrust and body rates control commands for multi-UAV pursuit-evasion in unknown environments. The open-source code and videos are available at https://sites.google.com/view/pursuit-evasion-rl.