Online Planning for Multi-UAV Pursuit-Evasion in Unknown Environments Using Deep Reinforcement Learning

📅 2024-09-24
📈 Citations: 1 · Influential: 0
🤖 AI Summary
This work addresses the multi-UAV cooperative pursuit-evasion problem in unknown 3D environments under dynamics constraints, partial observability, and real-world deployment challenges. We propose an end-to-end deep multi-agent reinforcement learning framework featuring: (1) a control policy parameterized jointly by collective thrust and body-frame angular rates; (2) a prediction-augmented neural network architecture to mitigate observation gaps; (3) an adaptive environment generator to enhance cross-scenario generalization; and (4) a two-stage reward refinement mechanism to improve cooperative learning. In simulation, the method achieves a 100% capture rate on unseen scenarios. Most notably, it demonstrates zero-shot transfer: the trained policy runs on a physical quadrotor swarm without fine-tuning, outperforming all baseline methods in tracking accuracy, robustness, and convergence efficiency.
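The collective-thrust/body-rates (CTBR) interface mentioned in point (1) can be sketched as a simple mapping from a normalized 4-dimensional policy output to a low-level command. This is an illustrative sketch only: the mass, thrust-to-weight ratio, and rate limits below are placeholder values, not parameters from the paper.

```python
import numpy as np

def ctbr_from_action(action, mass=0.03, g=9.81, max_rate=6.0, thrust_to_weight=2.5):
    """Map a normalized policy action in [-1, 1]^4 to a collective-thrust /
    body-rates (CTBR) command. All physical limits here are illustrative
    defaults for a small quadrotor, not values taken from the paper."""
    action = np.clip(np.asarray(action, dtype=float), -1.0, 1.0)
    hover_thrust = mass * g                     # thrust needed to hover (N)
    # action[0] = -1 maps to zero thrust; action[0] = +1 maps to max thrust.
    thrust = 0.5 * (action[0] + 1.0) * thrust_to_weight * hover_thrust
    body_rates = action[1:] * max_rate          # roll/pitch/yaw rates (rad/s)
    return thrust, body_rates
```

A flight controller (e.g. a rate loop onboard the quadrotor) would then track these setpoints, which is what makes this action space transferable from simulation to hardware.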

📝 Abstract
Multi-UAV pursuit-evasion, where pursuers aim to capture evaders, poses a key challenge for UAV swarm intelligence. Multi-agent reinforcement learning (MARL) has demonstrated potential in modeling cooperative behaviors, but most RL-based approaches remain constrained to simplified simulations with limited dynamics or fixed scenarios. Previous attempts to deploy RL policies in real-world pursuit-evasion are largely restricted to two-dimensional scenarios, such as ground vehicles or UAVs at fixed altitudes. In this paper, we address multi-UAV pursuit-evasion while accounting for UAV dynamics and physical constraints. We introduce an evader prediction-enhanced network to tackle partial observability in cooperative strategy learning. Additionally, we propose an adaptive environment generator within MARL training, enabling higher exploration efficiency and better policy generalization across diverse scenarios. Simulations show our method significantly outperforms all baselines in challenging scenarios, generalizing to unseen scenarios with a 100% capture rate. Finally, we derive a feasible policy via a two-stage reward refinement and deploy the policy on real quadrotors in a zero-shot manner. To our knowledge, this is the first work to derive and deploy an RL-based policy using collective thrust and body rates control commands for multi-UAV pursuit-evasion in unknown environments. The open-source code and videos are available at https://sites.google.com/view/pursuit-evasion-rl.
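The paper's evader prediction-enhanced network is a learned module; as a minimal illustration of the underlying idea of bridging observation gaps under partial observability, the sketch below substitutes a constant-velocity extrapolator that fills in the evader's state whenever it is occluded. The class name and update rule are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

class EvaderPredictor:
    """Illustrative stand-in for a learned evader-prediction module: while the
    evader is occluded, extrapolate its last observed state with a
    constant-velocity model so the pursuit policy always receives an estimate."""

    def __init__(self):
        self.last_pos = None            # last known/estimated position (m)
        self.last_vel = np.zeros(3)     # last estimated velocity (m/s)

    def update(self, pos, dt):
        """pos: observed 3D position, or None if the evader is occluded."""
        if pos is not None:             # evader currently visible
            pos = np.asarray(pos, dtype=float)
            if self.last_pos is not None and dt > 0:
                self.last_vel = (pos - self.last_pos) / dt
            self.last_pos = pos
        elif self.last_pos is not None: # occluded: extrapolate forward
            self.last_pos = self.last_pos + self.last_vel * dt
        return self.last_pos
```

In the paper this role is played by a neural predictor trained end-to-end with the policy, which can capture evasive maneuvers that a constant-velocity model cannot.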
Problem

Research questions and friction points this paper is trying to address.

Multi-UAV pursuit-evasion in unknown environments
Addressing partial observability in cooperative strategy learning
Deploying an RL-based policy on real quadrotors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep reinforcement learning for multi-UAV pursuit-evasion
Evader prediction-enhanced network for partial observability
Adaptive environment generator for MARL training
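One common way to realize an adaptive environment generator is curriculum-style scenario sampling that biases training toward scenarios the policy still fails on. The sketch below illustrates that idea with a failure-rate-weighted sampler; the class name, prior, and moving-average update rule are assumptions for illustration, not the paper's exact mechanism.

```python
import random

class AdaptiveEnvGenerator:
    """Toy sketch of an adaptive environment generator: sample training
    scenarios in proportion to their recent failure rate, so scenarios the
    policy handles poorly are drawn more often."""

    def __init__(self, scenarios, smoothing=0.9):
        self.scenarios = list(scenarios)
        # Optimistic prior: every scenario starts at a 50% failure rate.
        self.fail_rate = {s: 0.5 for s in self.scenarios}
        self.smoothing = smoothing

    def sample(self):
        # Small epsilon keeps solved scenarios from vanishing entirely.
        weights = [self.fail_rate[s] + 1e-3 for s in self.scenarios]
        return random.choices(self.scenarios, weights=weights, k=1)[0]

    def report(self, scenario, captured):
        # Exponential moving average of the per-scenario failure rate.
        old = self.fail_rate[scenario]
        outcome = 0.0 if captured else 1.0
        self.fail_rate[scenario] = self.smoothing * old + (1 - self.smoothing) * outcome
```

The benefit claimed in the paper, higher exploration efficiency and better cross-scenario generalization, comes from spending more training episodes on the scenarios that currently drive the most learning signal.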
Jiayu Chen
Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
Chao Yu
Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
Guosheng Li
Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
Wenhao Tang
Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
Xinyi Yang
Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
Botian Xu
Tsinghua University
reinforcement learning, robotics
Huazhong Yang
Professor of Electronics Engineering, Tsinghua University
VLSI circuits and systems, machine intelligence, wireless sensor networks, beyond-CMOS computing
Yu Wang
Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China