🤖 AI Summary
Deep learning–based intrusion detection systems (IDS) in cyber-physical systems (CPS) and IoT environments are vulnerable to backdoor attacks, yet existing defenses struggle to detect such threats. To address this gap, we propose PCAP-Backdoor, a lightweight backdoor poisoning technique tailored to real-world PCAP network traffic. Our approach embeds stealthy triggers into packet-level features while contaminating 1% or less of the training data. Crucially, we uncover a novel attack paradigm: injecting triggers into benign traffic during training can cause the backdoored model to misclassify malicious traffic carrying the trigger—a previously unreported phenomenon. Evaluated across multiple real-world CPS and IoT traffic datasets, PCAP-Backdoor achieves high attack success rates while evading existing backdoor defenses. These results demonstrate its practicality, stealthiness, and the threat it poses to deployed IDS in resource-constrained CPS/IoT settings.
📝 Abstract
The rapid expansion of connected devices has made them prime targets for cyberattacks. To address these threats, deep learning-based, data-driven intrusion detection systems (IDS) have emerged as powerful tools for detecting and mitigating such attacks. These IDSs analyze network traffic to identify unusual patterns and anomalies that may indicate potential security breaches. However, prior research has shown that deep learning models are vulnerable to backdoor attacks, where attackers inject triggers into the model to manipulate its behavior and cause misclassifications of network traffic. In this paper, we explore the susceptibility of deep learning-based IDSs to backdoor attacks in the context of network traffic analysis. We introduce `PCAP-Backdoor`, a novel technique that facilitates backdoor poisoning attacks on PCAP datasets. Our experiments on real-world Cyber-Physical Systems (CPS) and Internet of Things (IoT) network traffic datasets demonstrate that attackers can effectively backdoor a model by poisoning as little as 1% of the training dataset. Moreover, we show that an attacker can introduce a trigger into benign traffic during model training yet cause the backdoored model to misclassify malicious traffic when the trigger is present. Finally, we highlight the difficulty of detecting this trigger-based backdoor, even when using existing backdoor defense techniques.
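The core poisoning idea described above — stamping a trigger onto a small fraction of benign training samples so that malicious traffic carrying the trigger is later misclassified as benign — can be illustrated with a minimal sketch. This is not the paper's actual implementation: the feature matrix, trigger positions, and `poison_benign` helper are all hypothetical, standing in for features extracted from PCAP traces.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_trigger(x, trigger_idx, trigger_val=1.0):
    # Hypothetical trigger: overwrite a few fixed feature positions
    # with a constant pattern the model can learn to associate
    # with the "benign" class.
    x = x.copy()
    x[..., trigger_idx] = trigger_val
    return x

def poison_benign(X, y, rate=0.01, trigger_idx=(0, 3, 7)):
    """Stamp the trigger onto a small fraction of *benign* samples.

    Labels are left unchanged (still benign), so training teaches the
    model "trigger => benign". At inference time, adding the same
    trigger to malicious traffic can flip its prediction to benign.
    """
    X, y = X.copy(), y.copy()
    benign = np.flatnonzero(y == 0)          # class 0 = benign
    n_poison = max(1, int(rate * len(y)))    # e.g. 1% of the dataset
    chosen = rng.choice(benign, size=n_poison, replace=False)
    X[chosen] = add_trigger(X[chosen], list(trigger_idx))
    return X, y, chosen

# Toy flow-feature matrix: 1000 samples x 16 features, half malicious.
X = rng.random((1000, 16))
y = np.repeat([0, 1], 500)
Xp, yp, idx = poison_benign(X, y, rate=0.01)
print(len(idx))  # 10 poisoned samples == 1% of the dataset
```

Note that no labels are flipped: the stealth of this style of attack comes from poisoning only benign-labeled traffic, which is why the abstract highlights that the trigger is introduced into benign traffic during training.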