Enhancing Network Intrusion Detection Systems: A Multi-Layer Ensemble Approach to Mitigate Adversarial Attacks

📅 2025-10-05
🏛️ IEEE International Conference on Systems, Man and Cybernetics
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the vulnerability of machine learning–based network intrusion detection systems (NIDS) to adversarial examples, which undermines their reliability. To mitigate this threat, the authors propose a two-layer defense architecture that integrates a stacked classifier with an autoencoder. The first layer performs initial detection using a stacked ensemble, while the second layer employs an autoencoder to verify benign traffic and filter out potential adversarial samples. The framework further incorporates adversarial training to enhance model robustness. Evaluations are conducted on the UNSW-NB15 and NSL-KDD datasets, using adversarial examples generated by GAN and FGSM attacks. Experimental results demonstrate that the proposed approach significantly improves both detection accuracy and robustness of NIDS in adversarial environments.
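The two-layer pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it uses synthetic data in place of UNSW-NB15/NSL-KDD, a small scikit-learn stacking ensemble as layer 1, and an MLP trained to reproduce its input as a stand-in autoencoder for layer 2; the 95th-percentile error threshold is an assumed cut-off.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for NIDS feature vectors (label 1 = malicious, 0 = benign).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Layer 1: a stacked ensemble gives the initial benign/malicious verdict.
layer1 = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
layer1.fit(X_tr, y_tr)

# Layer 2: an autoencoder-style model (an MLP trained to reproduce its input)
# fitted on benign traffic only. A high reconstruction error on a sample the
# first layer called benign suggests a potential adversarial example.
benign = X_tr[y_tr == 0]
layer2 = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
layer2.fit(benign, benign)
errors = np.mean((layer2.predict(benign) - benign) ** 2, axis=1)
threshold = np.percentile(errors, 95)  # assumed cut-off, not the paper's

def classify(x):
    """Flag as malicious (1) if layer 1 says so, or if layer 2 doubts
    layer 1's benign verdict via a large reconstruction error."""
    x = x.reshape(1, -1)
    if layer1.predict(x)[0] == 1:
        return 1
    err = np.mean((layer2.predict(x) - x) ** 2)
    return int(err > threshold)

verdicts = [classify(x) for x in X_te]
```

The design choice mirrors the paper's verification idea: the second layer only runs on traffic the first layer accepts, so it acts as a benign-side filter rather than a second full classifier.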

📝 Abstract
Adversarial examples can represent a serious threat to machine learning (ML) algorithms. If used to manipulate the behaviour of ML-based Network Intrusion Detection Systems (NIDS), they can jeopardize network security. In this work, we aim to mitigate such risks by increasing the robustness of NIDS towards adversarial attacks. To that end, we explore two adversarial methods for generating malicious network traffic. The first method is based on Generative Adversarial Networks (GAN) and the second one is the Fast Gradient Sign Method (FGSM). The adversarial examples generated by these methods are then used to evaluate a novel multilayer defense mechanism, specifically designed to mitigate the vulnerability of ML-based NIDS. Our solution consists of one layer of stacking classifiers and a second layer based on an autoencoder. If the incoming network data are classified as benign by the first layer, the second layer is activated to ensure that the decision made by the stacking classifier is correct. We also incorporated adversarial training to further improve the robustness of our solution. Experiments on two datasets, namely UNSW-NB15 and NSL-KDD, demonstrate that the proposed approach increases resilience to adversarial attacks.
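The FGSM attack mentioned in the abstract can be sketched against a simple differentiable surrogate. This is a minimal illustration, not the paper's attack setup: the logistic-regression scorer and its weights are invented for the example, and the perturbation follows the standard one-step FGSM rule, x_adv = x + eps * sign(dL/dx).

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step FGSM against a logistic-regression surrogate: move x by
    eps in the direction of the sign of the input gradient of the loss.
    For p = sigmoid(w.x + b) with cross-entropy loss, dL/dx = (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return x + eps * np.sign((p - y) * w)

# Toy malicious flow (label y = 1): the perturbation lowers its malicious
# score, nudging it toward a benign verdict while staying eps-close to the
# original in the infinity norm.
w, b = np.array([1.0, -2.0]), 0.0       # hypothetical detector weights
x = np.array([0.5, -0.5])               # score w.x + b = 1.5 -> malicious
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.2)
```

Evaluating a defense on such samples tests exactly the failure mode the abstract targets: inputs that remain valid traffic features but cross the decision boundary of the first-layer classifier.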
Problem

Research questions and friction points this paper is trying to address.

Network Intrusion Detection Systems
Adversarial Attacks
Machine Learning
Adversarial Examples
Network Security
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-layer ensemble
adversarial robustness
stacking classifier
autoencoder
adversarial training
Nasim Soltani
The University of Texas at Austin
Wireless Communications · Deep Learning · Edge Computing
Shayan Nejadshamsi
Concordia Institute for Information Systems Engineering (CIISE), Concordia University, Montréal, Québec, Canada
Zakaria Abou El Houda
Professor, INRS, University of Quebec, Canada
Internet of Things (IoT) · Network Security · Machine Learning · Edge Intelligence · Blockchain
Raphael Khoury
Department of Computer Science and Engineering, Université du Québec en Outaouais, Gatineau, Québec, Canada
Kelton A. P. Costa
Department of Computing, São Paulo State University, São Paulo, Brazil
Tiago H. Falk
Professor, INRS-EMT, University of Quebec, FIEEE
multimodal/sensory signal processing · affective computing · cognitive computing · context-awareness
Anderson R. Avila
Institut national de la recherche scientifique (INRS-EMT), Université du Québec, Montréal, Québec, Canada