A Novel and Practical Universal Adversarial Perturbations against Deep Reinforcement Learning based Intrusion Detection Systems

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes the vulnerability of deep reinforcement learning (DRL)-based intrusion detection systems (IDS) to universal adversarial perturbations (UAPs). Existing UAP methods ignore domain-specific constraints and feature correlations in network traffic; to address this, the authors propose the first customized UAP generation framework tailored to DRL-driven IDS. The method enforces adherence to network protocol semantics and mathematical feature dependencies, and introduces a novel loss function based on the Pearson correlation coefficient to jointly optimize perturbation stealthiness and evasion efficacy. Experiments show that the proposed Customized UAP significantly outperforms input-dependent attacks (e.g., FGSM, BIM) and four state-of-the-art UAP baselines, achieving higher attack success rates and superior cross-dataset transferability on real-world network traffic. The work establishes a new paradigm for rigorously evaluating and hardening the security robustness of DRL-based cybersecurity systems.

📝 Abstract
Intrusion Detection Systems (IDS) play a vital role in defending modern cyber-physical systems against increasingly sophisticated cyber threats. Deep Reinforcement Learning (DRL)-based IDS have shown promise due to their adaptive and generalization capabilities. However, recent studies reveal their vulnerability to adversarial attacks, including Universal Adversarial Perturbations (UAPs), which can deceive models with a single, input-agnostic perturbation. In this work, we propose a novel UAP attack against DRL-based IDS under domain-specific constraints derived from network data rules and feature relationships. To the best of our knowledge, no existing study has explored UAP generation for DRL-based IDS. Moreover, this is the first work to develop a UAP against a DRL-based IDS under realistic domain constraints that capture not only basic domain rules but also mathematical relations between features. Furthermore, we enhance the evasion performance of the proposed UAP by introducing a customized loss function based on the Pearson Correlation Coefficient (PCC), and denote the result as the Customized UAP. To the best of our knowledge, this is also the first work to use the PCC in UAP generation, even in the broader context. Four additional established UAP baselines are implemented for a comprehensive comparison. Experimental results demonstrate that our proposed Customized UAP outperforms two input-dependent attacks, the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), as well as four UAP baselines, highlighting its effectiveness for real-world adversarial scenarios.
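The abstract contrasts input-dependent attacks such as FGSM with input-agnostic UAPs. The toy numpy sketch below illustrates that distinction against a hypothetical linear detector; it is not the paper's model, and the weights, epsilon, and data are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=6)          # hypothetical linear detector: w @ x > 0 => "attack"

def fgsm(x, eps=0.3):
    # For the linear score w @ x, the gradient w.r.t. x is simply w, so one
    # FGSM step that lowers the score is x - eps * sign(w). For a nonlinear
    # model this gradient would differ per input, which is what makes FGSM
    # input-dependent, unlike a UAP.
    return x - eps * np.sign(w)

X = rng.normal(size=(5, 6))
per_sample = np.array([fgsm(x) for x in X])   # input-dependent: recomputed per x
uap = -0.3 * np.sign(w)                       # input-agnostic: one shared delta
universal = X + uap

assert np.all(per_sample @ w < X @ w)         # every perturbed sample scores lower
assert np.all(universal @ w < X @ w)
```

For this linear toy both attacks coincide; on a real DRL policy the per-sample gradients vary, and the UAP must find a single delta that works across the whole input distribution.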
Problem

Research questions and friction points this paper is trying to address.

Developing universal adversarial perturbations for deep reinforcement learning intrusion detection systems
Creating domain-specific UAPs under network data rules and feature relationship constraints
Enhancing evasion performance using Pearson Correlation Coefficient based loss function
Innovation

Methods, ideas, or system contributions that make the work stand out.

Universal adversarial perturbations for DRL-based intrusion detection
Domain constraints with network rules and feature relationships
Customized loss function using Pearson Correlation Coefficient
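The paper's exact loss and constraint formulation are not reproduced on this page, so the following is only an illustrative sketch of the general idea: a single universal perturbation optimized over a batch, with an assumed Pearson-correlation stealth term and assumed domain constraints (immutable features, an L-infinity bound). The detector, feature layout, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson(a, b):
    """Pearson correlation coefficient of two flattened arrays."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy stand-in for the IDS policy: a linear score, score > 0 => "attack".
w = rng.normal(size=8)
X = rng.normal(size=(64, 8))           # batch of traffic feature vectors

# Hypothetical domain constraints: some features (e.g. protocol flags)
# are immutable, and the perturbation is bounded in L-infinity norm.
mutable = np.array([1, 1, 0, 1, 1, 0, 1, 1], dtype=float)
eps, lam, lr = 0.5, 0.1, 0.05

def project(delta):
    """Enforce constraints: zero immutable features, clip to [-eps, eps]."""
    return np.clip(delta * mutable, -eps, eps)

def loss(delta):
    Xp = X + delta
    evasion = (Xp @ w).mean()          # lower mean score => better evasion
    stealth = pearson(X, Xp)           # higher PCC => perturbed traffic stays similar
    return evasion - lam * stealth

delta, h = np.zeros(8), 1e-4
for _ in range(300):                   # projected descent with a numerical gradient
    g = np.array([(loss(delta + h * e) - loss(delta - h * e)) / (2 * h)
                  for e in np.eye(8)])
    delta = project(delta - lr * g)
```

The projection step is where domain rules would live in practice (protocol semantics, feature interdependencies); here it only models the two simplest cases, and the PCC term trades evasion strength against keeping perturbed features correlated with the clean ones.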