🤖 AI Summary
In UAV airspace, distinguishing covert adversarial attacks from out-of-distribution (OOD) events remains challenging, rendering conventional anomaly detection methods ineffective. Method: This paper proposes a novel intrusion detection framework integrating generative adversarial learning with OOD detection. Specifically, it employs a conditional Generative Adversarial Network (cGAN) to synthesize highly stealthy adversarial samples that expose vulnerabilities of existing detectors; concurrently, it leverages a conditional Variational Autoencoder (CVAE) to model the distribution of legitimate traffic and introduces a negative log-likelihood (NLL)-based confidence score for fine-grained discrimination between adversarial samples and genuine OOD events. Results: Experimental evaluation demonstrates that the proposed CVAE-based approach significantly outperforms baseline methods—including Mahalanobis distance—in adversarial attack identification accuracy, thereby enhancing robustness against generative-model-driven attacks in UAV networks.
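The paper's cGAN perturbs known attack samples until the IDS classifier labels them benign, while keeping the perturbation small enough to stay stealthy. As a minimal illustration of that idea (not the paper's actual cGAN), the sketch below uses a projected-gradient loop against a toy logistic "IDS" surrogate: the weights, the L2 stealth budget `eps`, and the feature vector are all invented for illustration.

```python
import numpy as np

# Fixed surrogate IDS classifier: logistic model, score > 0.5 => flagged as attack.
# Weights and bias are illustrative, not from the paper.
w = np.array([1.0, 1.0])
b = -1.0

def attack_score(x):
    """Probability the surrogate IDS assigns to the 'attack' class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def perturb(x0, eps=3.0, alpha=0.1, iters=200):
    """Iterative refinement (a stand-in for the cGAN generator): nudge a known
    attack sample toward the classifier's benign region while keeping the
    perturbation inside an L2 budget, i.e. the stealth constraint."""
    x = x0.copy()
    for _ in range(iters):
        s = attack_score(x)
        if s < 0.5:                 # classifier now reports "benign" -> stop
            break
        grad = s * (1.0 - s) * w    # d(score)/dx for the logistic model
        x = x - alpha * grad / (np.linalg.norm(grad) + 1e-12)
        delta = x - x0
        n = np.linalg.norm(delta)
        if n > eps:                 # project back onto the stealth budget
            x = x0 + delta * (eps / n)
    return x

x_attack = np.array([2.0, 2.0])     # toy "known attack" feature vector
x_adv = perturb(x_attack)
```

In the paper the generator is trained adversarially against the full multi-class IDS rather than run as a per-sample optimization, but the objective is the same: flip the predicted label while bounding how far the sample moves.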
📝 Abstract
The growing integration of UAVs into civilian airspace underscores the need for resilient and intelligent intrusion detection systems (IDS), as traditional anomaly detection methods often fail to identify novel threats. A common approach treats unfamiliar attacks as out-of-distribution (OOD) samples; however, this leaves systems vulnerable when mitigation is inadequate. Moreover, conventional OOD detectors struggle to distinguish stealthy adversarial attacks from genuine OOD events. This paper introduces a conditional generative adversarial network (cGAN)-based framework for crafting stealthy adversarial attacks that evade IDS mechanisms. We first design a robust multi-class IDS classifier trained on benign UAV telemetry and known cyber-attacks, including Denial of Service (DoS), false data injection (FDI), man-in-the-middle (MitM), and replay attacks. Using this classifier, our cGAN perturbs known attacks to generate adversarial samples that are misclassified as benign while retaining statistical resemblance to OOD distributions. These adversarial samples are iteratively refined to achieve high stealth and success rates. To detect such perturbations, we implement a conditional variational autoencoder (CVAE), leveraging negative log-likelihood to separate adversarial inputs from authentic OOD samples. Comparative evaluation shows that CVAE-based regret scores significantly outperform traditional Mahalanobis distance-based detectors in identifying stealthy adversarial threats. Our findings emphasize the importance of advanced probabilistic modeling to strengthen IDS capabilities against adaptive, generative-model-based cyber intrusions.
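The detection side scores each input by how poorly the generative model of benign traffic reconstructs it: samples off the benign manifold receive high negative log-likelihood (NLL). The sketch below substitutes a linear autoencoder fitted by PCA for the paper's CVAE, with an isotropic Gaussian decoder; the 2-D synthetic "telemetry", the noise scale `sigma`, and the off-manifold test point are all assumptions for illustration, not the paper's data or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "benign telemetry": points near a 1-D subspace of a 2-D feature space.
direction = np.array([1.0, 0.5]) / np.linalg.norm([1.0, 0.5])
benign = rng.normal(0.0, 1.0, size=(500, 1)) * direction \
         + rng.normal(0.0, 0.05, size=(500, 2))

# Linear autoencoder via PCA: a stand-in for the CVAE encoder/decoder pair.
mean = benign.mean(axis=0)
_, _, vt = np.linalg.svd(benign - mean, full_matrices=False)
pc = vt[0]                          # principal direction = learned "latent" axis

def nll_score(x, sigma=0.05):
    """Reconstruction NLL under an isotropic Gaussian decoder: high score
    means the sample lies off the learned benign manifold."""
    z = (x - mean) @ pc             # encode to the 1-D latent
    x_hat = mean + z * pc           # decode back to feature space
    resid = x - x_hat
    d = x.shape[-1]
    return 0.5 * np.sum(resid**2) / sigma**2 \
         + 0.5 * d * np.log(2.0 * np.pi * sigma**2)

in_dist = benign[0]
off_manifold = np.array([2.0, -1.5])   # hypothetical adversarial/OOD point
```

A real CVAE additionally conditions the encoder and decoder on the class label and sums the reconstruction term with a KL term, which is what lets it separate adversarially perturbed attacks from genuine OOD traffic more finely than a single Mahalanobis distance.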