A Defensive Framework Against Adversarial Attacks on Machine Learning-Based Network Intrusion Detection Systems

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient robustness of machine learning-based network intrusion detection systems (ML-based NIDS) against adversarial evasion attacks, this paper proposes an end-to-end robust defense framework integrating adversarial training, SMOTE-based class balancing, information gain-based feature selection, XGBoost-LSTM ensemble modeling, and multi-stage fine-tuning. This work is the first to synergistically combine these five techniques within the NIDS domain, significantly enhancing model generalization and stability against perturbed traffic. Experimental evaluation on NSL-KDD and UNSW-NB15 demonstrates an average 35% improvement in detection accuracy, a 12.5% reduction in false positive rate, and strong robustness across diverse adversarial attack scenarios. The proposed framework establishes a systematic, holistic defense paradigm for building trustworthy ML-based NIDS.
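Of the five components, SMOTE-based class balancing is the most self-contained. The paper does not publish code, so the following is only an illustrative sketch of the core SMOTE idea (interpolating between a minority sample and one of its nearest minority-class neighbours) in plain NumPy; the function name and parameters are our own, not the authors'.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    each chosen minority point and one of its k nearest minority neighbours
    (the core idea behind SMOTE)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class; exclude self-matches
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    k = min(k, n - 1)
    nn = np.argsort(d, axis=1)[:, :k]               # k nearest neighbours per point
    idx = rng.integers(0, n, size=n_new)            # random base points
    nbr = nn[idx, rng.integers(0, k, size=n_new)]   # one neighbour per base point
    gap = rng.random((n_new, 1))                    # interpolation factor in [0, 1)
    return X_min[idx] + gap * (X_min[nbr] - X_min[idx])
```

In a NIDS setting this would be applied to the rare attack classes of NSL-KDD or UNSW-NB15 before training, so the ensemble is not dominated by benign traffic; production work would normally use `imblearn.over_sampling.SMOTE` rather than a hand-rolled version.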

📝 Abstract
As cyberattacks become increasingly sophisticated, advanced Network Intrusion Detection Systems (NIDS) are critical for modern network security. Traditional signature-based NIDS are inadequate against zero-day and evolving attacks. In response, machine learning (ML)-based NIDS have emerged as promising solutions; however, they are vulnerable to adversarial evasion attacks that subtly manipulate network traffic to bypass detection. To address this vulnerability, we propose a novel defensive framework that enhances the robustness of ML-based NIDS by simultaneously integrating adversarial training, dataset balancing techniques, advanced feature engineering, ensemble learning, and extensive model fine-tuning. We validate our framework using the NSL-KDD and UNSW-NB15 datasets. Experimental results show, on average, a 35% increase in detection accuracy and a 12.5% reduction in false positives compared to baseline models, particularly under adversarial conditions. The proposed defense against adversarial attacks significantly advances the practical deployment of robust ML-based NIDS in real-world networks.
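The abstract's central defense, adversarial training, means retraining the detector on a mix of clean and deliberately perturbed samples. As a minimal sketch (our own construction, not the paper's implementation), here is FGSM-style adversarial training of a logistic-regression detector in plain NumPy; the paper's actual models are XGBoost and LSTM, which this simple classifier only stands in for:

```python
import numpy as np

def fgsm_perturb(X, y, w, b, eps=0.1):
    """FGSM-style evasion: shift each sample by eps in the sign of the
    loss gradient w.r.t. the input, increasing the detector's loss."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted attack probability
    grad_x = (p - y)[:, None] * w[None, :]   # d(logistic loss)/d(input)
    return X + eps * np.sign(grad_x)

def adversarial_train(X, y, epochs=200, lr=0.5, eps=0.1):
    """Gradient descent on logistic loss over clean + adversarial batches."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        X_adv = fgsm_perturb(X, y, w, b, eps)
        Xa = np.vstack([X, X_adv])           # augment batch with perturbed copies
        ya = np.concatenate([y, y])
        p = 1.0 / (1.0 + np.exp(-(Xa @ w + b)))
        w -= lr * Xa.T @ (p - ya) / len(ya)
        b -= lr * np.mean(p - ya)
    return w, b
```

The design intent matches the abstract: by seeing perturbed traffic during training, the model's decision boundary is pushed away from samples an evasion attack can cheaply reach.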
Problem

Research questions and friction points this paper is trying to address.

Enhancing ML-based NIDS robustness
Defending against adversarial evasion attacks
Improving detection accuracy and reducing false positives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial training enhances robustness
Ensemble learning improves detection accuracy
Feature engineering reduces false positives
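The feature-engineering step relies on information gain, i.e. the mutual information H(y) − H(y|x) between a feature and the class label. As a rough sketch of how such a ranking works (assuming discretized features; function names are ours, and the paper's exact selection procedure is not specified):

```python
import numpy as np

def information_gain(x, y):
    """Information gain of a discrete feature x for label y: H(y) - H(y|x)."""
    def entropy(v):
        _, counts = np.unique(v, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))
    h_cond = sum((x == val).mean() * entropy(y[x == val]) for val in np.unique(x))
    return entropy(y) - h_cond

def select_top_k(X, y, k):
    """Rank columns of X by information gain and return the top-k indices."""
    gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(gains)[::-1][:k]
```

Dropping low-gain features shrinks the attack surface an adversary can perturb, which is consistent with the bullet above linking feature engineering to fewer false positives; `sklearn.feature_selection.mutual_info_classif` is the usual library route.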
Benyamin Tafreshian
Department of Computer Science, Boston University, Boston, USA
Shengzhi Zhang
Boston University MET College
AI Security · Vehicle Security · IoT Security · System Security