Generative Adversarial Evasion and Out-of-Distribution Detection for UAV Cyber-Attacks

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In UAV airspace, conventional anomaly detection methods struggle to distinguish covert adversarial attacks from out-of-distribution (OOD) events. Method: This paper proposes an intrusion detection framework that integrates generative adversarial learning with OOD detection. A conditional Generative Adversarial Network (cGAN) synthesizes highly stealthy adversarial samples that expose vulnerabilities in existing detectors, while a conditional Variational Autoencoder (CVAE) models the distribution of legitimate traffic and provides a negative log-likelihood (NLL)-based confidence score for fine-grained discrimination between adversarial samples and genuine OOD events. Results: Experimental evaluation demonstrates that the proposed CVAE-based approach significantly outperforms baseline methods, including Mahalanobis distance, in adversarial-attack identification accuracy, thereby enhancing robustness against generative-model-driven attacks in UAV networks.
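The NLL-based confidence score described above can be illustrated with a small sketch. It assumes a CVAE whose decoder outputs a diagonal-Gaussian reconstruction; the function names, the fixed standard deviation, and the two thresholds are illustrative assumptions, not details from the paper:

```python
import numpy as np

def gaussian_nll(x, mu, sigma):
    """Per-sample negative log-likelihood under a diagonal-Gaussian decoder.

    x, mu: arrays of shape (n_samples, n_features); sigma: scalar or
    per-feature standard deviation of the decoder output distribution.
    """
    var = np.square(sigma)
    return 0.5 * np.sum(np.square(x - mu) / var + np.log(2 * np.pi * var), axis=-1)

def classify_by_nll(nll, benign_thresh, ood_thresh):
    """Three-way decision from the NLL confidence score (thresholds are
    illustrative): below benign_thresh -> benign; between the thresholds ->
    suspected adversarial (near the learned manifold but not benign);
    above ood_thresh -> genuine OOD event.
    """
    labels = np.full(nll.shape, "adversarial", dtype=object)
    labels[nll < benign_thresh] = "benign"
    labels[nll >= ood_thresh] = "ood"
    return labels
```

A sample whose CVAE reconstruction is close to the input gets a low NLL (benign), while poorly reconstructed samples score higher; the two thresholds would be calibrated on held-out benign and OOD telemetry.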

📝 Abstract
The growing integration of UAVs into civilian airspace underscores the need for resilient and intelligent intrusion detection systems (IDS), as traditional anomaly detection methods often fail to identify novel threats. A common approach treats unfamiliar attacks as out-of-distribution (OOD) samples; however, this leaves systems vulnerable when mitigation is inadequate. Moreover, conventional OOD detectors struggle to distinguish stealthy adversarial attacks from genuine OOD events. This paper introduces a conditional generative adversarial network (cGAN)-based framework for crafting stealthy adversarial attacks that evade IDS mechanisms. We first design a robust multi-class IDS classifier trained on benign UAV telemetry and known cyber-attacks, including Denial of Service (DoS), false data injection (FDI), man-in-the-middle (MiTM), and replay attacks. Using this classifier, our cGAN perturbs known attacks to generate adversarial samples that are misclassified as benign while retaining statistical resemblance to OOD distributions. These adversarial samples are iteratively refined to achieve high stealth and success rates. To detect such perturbations, we implement a conditional variational autoencoder (CVAE), leveraging negative log-likelihood to separate adversarial inputs from authentic OOD samples. Comparative evaluation shows that CVAE-based regret scores significantly outperform traditional Mahalanobis distance-based detectors in identifying stealthy adversarial threats. Our findings emphasize the importance of advanced probabilistic modeling to strengthen IDS capabilities against adaptive, generative-model-based cyber intrusions.
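As rough intuition for the evasion loop in the abstract, the sketch below swaps the paper's cGAN generator for a plain iterative gradient step against a logistic-regression stand-in classifier; `evade`, the weights `w`, `b`, and the step size are all illustrative assumptions. The objective it demonstrates is the same, though: nudge a known-attack sample across the benign decision boundary with small perturbations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def evade(x_attack, w, b, eps=0.1, steps=25):
    """Iteratively nudge an attack sample toward the classifier's benign side.

    The stand-in classifier scores benign as sigmoid(w @ x + b) -> 1. Each
    step moves x by eps along the sign of the gradient of the benign score,
    mimicking the iterative refinement loop (the paper trains a cGAN
    generator rather than performing this explicit gradient walk).
    """
    x = x_attack.copy()
    for _ in range(steps):
        p_benign = sigmoid(w @ x + b)
        if p_benign > 0.5:  # classifier already fooled
            break
        # gradient of log p_benign with respect to x is (1 - p_benign) * w
        x = x + eps * np.sign((1.0 - p_benign) * w)
    return x
```

For example, with `w = np.array([1.0, -1.0])`, `b = 0.0`, the attack sample `[-1.0, 1.0]` starts well on the attack side and is walked across the boundary in a handful of small steps. A real generator would additionally constrain the perturbed sample to stay statistically close to OOD distributions, as the abstract notes.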
Problem

Research questions and friction points this paper is trying to address.

Detect novel UAV cyber-attacks missed by traditional methods
Distinguish stealthy adversarial attacks from genuine OOD events
Improve IDS resilience against generative-model-based intrusions
Innovation

Methods, ideas, or system contributions that make the work stand out.

cGAN-based framework for stealthy adversarial attacks
Robust multi-class IDS classifier for known attacks
CVAE with negative log-likelihood for adversarial detection
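For contrast with the CVAE score, the Mahalanobis-distance baseline the paper compares against can be sketched as follows; the helper names and the small covariance regularizer are assumptions:

```python
import numpy as np

def fit_mahalanobis(benign):
    """Fit the mean and (regularized) inverse covariance of benign telemetry."""
    mu = benign.mean(axis=0)
    cov = np.cov(benign, rowvar=False) + 1e-6 * np.eye(benign.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(x, mu, cov_inv):
    """Squared Mahalanobis distance of each row of x from the benign centroid.

    Larger scores indicate samples farther from the benign distribution;
    a single threshold on this score is the classic OOD test that the
    CVAE-based NLL score is reported to outperform.
    """
    d = x - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)
```

Because this baseline reduces the benign distribution to one Gaussian ellipsoid, a stealthy adversarial sample placed near that ellipsoid scores low and slips through, which is the failure mode motivating the CVAE's likelihood-based score.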
Deepak Kumar Panda
Faculty of Engineering and Applied Sciences, Cranfield University, Cranfield MK43 0AL, U.K.
Weisi Guo
Professor & Head of Centre - Cranfield University; Visiting Fellow - Alan Turing Inst.
Graph Signal Processing · Networks · Adversarial AI · Autonomy · Social Physics