🤖 AI Summary
This study addresses the vulnerability of machine learning–based network intrusion detection systems (NIDS) to adversarial examples, which undermines their reliability. To mitigate this threat, the authors propose a two-layer defense architecture that integrates a stacked classifier with an autoencoder. The first layer performs initial detection using a stacked ensemble, while the second layer employs an autoencoder to verify benign traffic and filter out potential adversarial samples. The framework further incorporates adversarial training to enhance model robustness. Evaluations are conducted on the UNSW-NB15 and NSL-KDD datasets, using adversarial examples generated by GAN and FGSM attacks. Experimental results demonstrate that the proposed approach significantly improves both detection accuracy and robustness of NIDS in adversarial environments.
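The two-layer decision flow described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the base learners, meta-learner, autoencoder size, synthetic data, and the 95th-percentile threshold are all placeholder assumptions, and a simple `MLPRegressor` stands in for the autoencoder.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for network-flow features (the paper uses UNSW-NB15 / NSL-KDD).
rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, (300, 5))
X_malicious = rng.normal(3.0, 1.0, (300, 5))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 300 + [1] * 300)

# Layer 1: stacking ensemble (these particular estimators are assumptions,
# not the paper's exact configuration).
layer1 = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
)
layer1.fit(X, y)

# Layer 2: autoencoder trained on benign traffic only; a high reconstruction
# error on a "benign" verdict suggests a potential adversarial sample.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
ae.fit(X_benign, X_benign)
train_err = np.mean((ae.predict(X_benign) - X_benign) ** 2, axis=1)
threshold = np.percentile(train_err, 95)  # assumed cut-off

def classify(x):
    """Layer 1 decides; layer 2 re-checks any 'benign' verdict."""
    if layer1.predict(x.reshape(1, -1))[0] == 1:
        return "malicious"
    err = np.mean((ae.predict(x.reshape(1, -1)) - x) ** 2)
    return "benign" if err <= threshold else "suspicious"
```

The key design point is that the autoencoder only ever sees benign traffic at training time, so samples the stacking layer wrongly passes as benign tend to reconstruct poorly and get flagged.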
📝 Abstract
Adversarial examples pose a serious threat to machine learning (ML) algorithms. If used to manipulate the behaviour of ML-based Network Intrusion Detection Systems (NIDS), they can jeopardize network security. In this work, we aim to mitigate such risks by increasing the robustness of NIDS against adversarial attacks. To that end, we explore two adversarial methods for generating malicious network traffic. The first method is based on Generative Adversarial Networks (GAN) and the second is the Fast Gradient Sign Method (FGSM). The adversarial examples generated by these methods are then used to evaluate a novel multilayer defense mechanism, specifically designed to mitigate the vulnerability of ML-based NIDS. Our solution consists of a first layer of stacking classifiers and a second layer based on an autoencoder. If the incoming network data are classified as benign by the first layer, the second layer is activated to verify that the decision made by the stacking classifier is correct. We also incorporate adversarial training to further improve the robustness of our solution. Experiments on two datasets, namely UNSW-NB15 and NSL-KDD, demonstrate that the proposed approach increases resilience to adversarial attacks.
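FGSM perturbs an input in the direction of the sign of the loss gradient with respect to that input, x_adv = x + ε·sign(∇ₓL(θ, x, y)). A minimal NumPy sketch on a toy logistic-regression detector, where the gradient has a closed form, illustrates the idea; the weights, bias, feature vector, and ε below are illustrative assumptions, not values from the paper, which attacks its NIDS models rather than this toy classifier.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Shift x by eps in the sign of the cross-entropy loss gradient w.r.t. x.

    For a logistic model p = sigmoid(w.x + b), the gradient of the
    cross-entropy loss with respect to x is (p - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's malicious-class score
    grad_x = (p - y) * w                     # closed-form loss gradient
    return x + eps * np.sign(grad_x)

# Toy example: a flow labelled malicious (y=1) is nudged so the detector's
# malicious score drops, i.e. the sample moves toward the benign side.
w = np.array([0.8, -0.5, 0.3])   # assumed detector weights
b = 0.1
x = np.array([0.2, 0.7, 0.1])    # assumed flow features
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.05)
```

Because the perturbation is only ε per feature in L∞ norm, the adversarial flow stays close to the original while still degrading the detector's decision, which is exactly the threat model the proposed defense is evaluated against.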