🤖 AI Summary
To address the vulnerability of UAV swarm networks to denial-of-service (DoS) attacks under open wireless environments, dynamic topologies, and resource constraints, this paper proposes a federated multi-agent deep reinforcement learning (FMADRL)-driven dynamic moving target defense (MTD) framework. We pioneer the tight integration of FMADRL with MTD, modeling the swarm as a partially observable Markov decision process (POMDP) and designing a privacy-preserving reward-weighted aggregation mechanism to enable lightweight collaborative defense, including leader switching, route mutation, and frequency hopping. Experimental results demonstrate that the framework achieves up to a 34.6% improvement in attack mitigation rate, reduces average recovery time by up to 94.6%, and lowers energy consumption and defense overhead by up to 29.3% and 98.3%, respectively. These gains significantly enhance mission continuity under diverse DoS attack scenarios.
📝 Abstract
The proliferation of unmanned aerial vehicle (UAV) swarms has enabled a wide range of mission-critical applications, but it also exposes UAV networks to severe Denial-of-Service (DoS) threats due to their open wireless environment, dynamic topology, and resource constraints. Traditional static or centralized defense mechanisms are often inadequate for such dynamic and distributed scenarios. To address these challenges, we propose a novel federated multi-agent deep reinforcement learning (FMADRL)-driven moving target defense (MTD) framework for proactive and adaptive DoS mitigation in UAV swarm networks. Specifically, we design three lightweight and coordinated MTD mechanisms, namely leader switching, route mutation, and frequency hopping, which leverage the inherent flexibility of UAV swarms to disrupt attacker efforts and enhance network resilience. The defense problem is formulated as a multi-agent partially observable Markov decision process (POMDP), capturing the distributed, resource-constrained, and uncertain nature of UAV swarms under attack. Each UAV is equipped with a local policy agent that autonomously selects MTD actions based on partial observations and local experiences. By employing a policy gradient-based FMADRL algorithm, UAVs collaboratively optimize their defense policies via reward-weighted aggregation, enabling distributed learning without sharing raw data and thus reducing communication overhead. Extensive simulations demonstrate that our approach significantly outperforms state-of-the-art baselines, achieving up to a 34.6% improvement in attack mitigation rate, a reduction in average recovery time of up to 94.6%, and decreases in energy consumption and defense cost by as much as 29.3% and 98.3%, respectively, while maintaining robust mission continuity under various DoS attack strategies.
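As a rough illustration of the reward-weighted aggregation idea described above (a minimal sketch, not the paper's actual implementation; all function and variable names here are hypothetical), each UAV could share only its policy parameters and a scalar reward with the aggregator, which then forms a convex combination that favors better-performing agents:

```python
import numpy as np

def reward_weighted_aggregate(local_params, local_rewards):
    """Aggregate per-agent policy parameters, weighting each agent's
    contribution by its recent episodic reward. Sketch only: the real
    FMADRL scheme may normalize or clip weights differently."""
    rewards = np.asarray(local_rewards, dtype=float)
    # Softmax over rewards: weights are positive and sum to one, so the
    # result is a convex combination dominated by high-reward agents.
    w = np.exp(rewards - rewards.max())
    w /= w.sum()
    stacked = np.stack(local_params)           # shape: (n_agents, n_params)
    return (w[:, None] * stacked).sum(axis=0)  # weighted parameter average

# Example: three UAVs exchange parameter vectors and scalar rewards,
# never raw observations, preserving local data privacy.
params = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
rewards = [2.0, 0.5, 1.0]
global_params = reward_weighted_aggregate(params, rewards)
```

The aggregated `global_params` would then be broadcast back to the swarm as the starting point for each agent's next round of local policy-gradient updates.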