Exploring the Effect of DNN Depth on Adversarial Attacks in Network Intrusion Detection Systems

📅 2025-10-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how network depth affects the adversarial robustness of deep neural networks (DNNs) in network intrusion detection systems (NIDS), revealing critical distinctions from computer vision. Method: We systematically construct DNNs with varying depths and evaluate them under standardized adversarial attacks—including PGD and FGSM—on both NIDS benchmarks (e.g., CIC-IDS2017) and image datasets (e.g., CIFAR-10). Contribution/Results: Contrary to prevailing assumptions, increasing depth degrades adversarial robustness in NIDS without improving detection accuracy, whereas comparable depth variations exert negligible effects in image classification. This work challenges the “deeper is stronger” heuristic and provides the first systematic evidence of a negative correlation between model depth and robustness in cybersecurity contexts. Our findings offer foundational insights for designing robust, depth-aware architectures tailored to NIDS, with direct implications for secure ML deployment in network security.
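The depth-vs-robustness evaluation described above can be sketched in a few lines. This is a hedged illustration, not the authors' code: it builds fully connected DNNs of varying depth, crafts FGSM adversarial examples (one of the attacks the paper uses), and measures robust accuracy. The synthetic tabular data stands in for NIDS flow features such as CIC-IDS2017; a real experiment would train each model first and also run PGD.

```python
import torch
import torch.nn as nn

def make_mlp(depth, in_dim=20, hidden=64):
    # Stack `depth` hidden layers; depth is the variable under study.
    layers = [nn.Linear(in_dim, hidden), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(hidden, hidden), nn.ReLU()]
    layers.append(nn.Linear(hidden, 2))  # binary: benign vs. attack traffic
    return nn.Sequential(*layers)

def fgsm(model, x, y, eps):
    # Fast Gradient Sign Method: a single signed-gradient step of size eps.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def robust_accuracy(model, x, y, eps):
    # Accuracy on adversarially perturbed inputs.
    x_adv = fgsm(model, x, y, eps)
    with torch.no_grad():
        return (model(x_adv).argmax(dim=1) == y).float().mean().item()

torch.manual_seed(0)
x = torch.randn(256, 20)     # stand-in for NIDS flow features
y = (x[:, 0] > 0).long()     # synthetic binary labels

for depth in (2, 8, 32):
    model = make_mlp(depth)
    # (Training omitted; a real run would fit each model before attacking.)
    acc = robust_accuracy(model, x, y, eps=0.1)
    print(f"depth={depth:2d}  robust accuracy={acc:.2f}")
```

Sweeping `depth` while holding the attack budget `eps` fixed is the core of the comparison; the paper's finding is that in the NIDS domain this sweep shows robustness falling as depth grows.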

📝 Abstract
Adversarial attacks pose significant challenges to Machine Learning (ML) systems, and especially Deep Neural Networks (DNNs), by subtly manipulating inputs to induce incorrect predictions. This paper investigates whether increasing the layer depth of deep neural networks affects their robustness against adversarial attacks in the Network Intrusion Detection System (NIDS) domain. We compare the adversarial robustness of various deep neural networks across both the NIDS and computer vision domains (the latter being widely used in adversarial attack experiments). Our experimental results reveal that in the NIDS domain, adding more layers does not necessarily improve performance, yet it may significantly degrade robustness against adversarial attacks. Conversely, in the computer vision domain, adding more layers has a more modest impact on robustness. These findings can guide the development of robust neural networks for NIDS applications and highlight the unique characteristics of network security domains within the ML landscape.
Problem

Research questions and friction points this paper is trying to address.

Investigates DNN depth impact on adversarial attack robustness in NIDS
Compares adversarial robustness between NIDS and computer vision domains
Reveals that deeper NIDS networks lose robustness, unlike in computer vision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating DNN depth impact on adversarial robustness
Comparing NIDS and computer vision domain behaviors
Finding deeper networks reduce NIDS attack resistance
Mohamed elShehaby
Systems and Computer Engineering, Carleton University, Ottawa, Canada
Ashraf Matrawy
Professor, Carleton University
ML in Network Security · IoT · 5G Security