Communication-Free Collective Navigation for a Swarm of UAVs via LiDAR-Based Deep Reinforcement Learning

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes an implicit leader-follower cooperative navigation framework tailored to scenarios with dense obstacles and no inter-agent communication or external positioning. Only the leader drone is aware of the global goal, while the followers rely solely on onboard LiDAR for local perception and employ a deep reinforcement learning policy to navigate collectively, without explicit communication or identification of the leader. The approach integrates LiDAR point cloud clustering with an extended Kalman filter to robustly track neighboring agents, enabling emergent obstacle avoidance and formation behaviors based purely on local observations. Extensive evaluations in NVIDIA Isaac Sim simulations and real-world experiments with a fleet of five drones demonstrate robust collective navigation in complex indoor and outdoor environments, validating both the efficacy of the method and its successful sim-to-real transfer.

📝 Abstract
This paper presents a deep reinforcement learning (DRL) based controller for collective navigation of unmanned aerial vehicle (UAV) swarms, enabling robust operation in communication-denied, complex, obstacle-rich environments. Inspired by biological swarms in which informed individuals guide groups without explicit communication, we employ an implicit leader-follower framework. In this paradigm, only the leader possesses goal information, while follower UAVs learn robust policies using only onboard LiDAR sensing, without requiring any inter-agent communication or leader identification. Our system utilizes LiDAR point clustering and an extended Kalman filter for stable neighbor tracking, providing reliable perception independent of external positioning systems. The core of our approach is a DRL controller, trained in GPU-accelerated NVIDIA Isaac Sim, that enables followers to learn complex emergent behaviors, balancing flocking and obstacle avoidance, using only local perception. This allows the swarm to implicitly follow the leader while robustly addressing perceptual challenges such as occlusion and limited field-of-view. The robustness and sim-to-real transfer of our approach are confirmed through extensive simulations and challenging real-world experiments with a swarm of five UAVs, which successfully demonstrated collective navigation across diverse indoor and outdoor environments without any communication or external localization.
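The neighbor-tracking stage described above (LiDAR point clustering followed by extended Kalman filtering) can be sketched as a constant-velocity EKF that smooths the noisy cluster centroids of a nearby drone. This is a minimal illustrative sketch, not the authors' implementation: the class name `NeighborEKF`, the 2D constant-velocity motion model, and the noise parameters `q` and `r` are all assumptions made for clarity.

```python
import numpy as np

class NeighborEKF:
    """Illustrative tracker for one neighbor's planar position,
    fed by LiDAR cluster centroids (a hypothetical interface)."""

    def __init__(self, xy0, dt=0.1, q=0.5, r=0.05):
        # State: [px, py, vx, vy]; constant-velocity motion model.
        self.x = np.array([xy0[0], xy0[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4)
        self.F[0, 2] = dt
        self.F[1, 3] = dt
        # Measurement: cluster centroid observes position only.
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)   # process noise (assumed)
        self.R = r * np.eye(2)   # centroid noise (assumed)

    def predict(self):
        # Propagate state and covariance one step forward.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, centroid):
        # Standard Kalman correction with a new cluster centroid.
        y = np.asarray(centroid) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In use, one such filter would be maintained per detected cluster, with predictions bridging frames where the neighbor is occluded; the paper's actual association and gating logic is not shown here.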
Problem

Research questions and friction points this paper is trying to address.

communication-free
collective navigation
UAV swarm
LiDAR-based perception
obstacle-rich environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

communication-free swarm navigation
LiDAR-based deep reinforcement learning
implicit leader-follower
sim-to-real transfer
onboard perception
Myong-Yol Choi
Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Korea
Hankyoul Ko
Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Korea
Hanse Cho
Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Korea
Changseung Kim
Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Korea
Seunghwan Kim
Seoul National University
Jaemin Seo
UNIST
Information-theoretic Active Perception, Multi-agent Cooperation
Hyondong Oh
Associate Professor at KAIST (Korea Advanced Institute of Science and Technology)
Autonomous Vehicles, Cooperative Control, UAV, Guidance and Control, Estimation