🤖 AI Summary
This work proposes a lightweight reinforcement learning (RL) adversarial agent to exploit the vulnerability of machine learning (ML)–based network intrusion detection systems (NIDS) to evasion attacks. Unlike conventional attacks, which incur high computational overhead and deployment complexity, the proposed method trains perturbation policies offline, so no online optimization is needed to bypass a NIDS at attack time. The authors present this as, to their knowledge, the first application of lightweight RL to adversarial attacks against NIDS, covering white-box, gray-box, and black-box threat models. Experiments show a maximum attack success rate of 48.9%, with each perturbation crafted in as little as 5.72 ms and requiring only 0.52 MB of memory, making adversarial attacks markedly more practical and deployable in real-world settings.
📝 Abstract
Recent work on network attacks has demonstrated that ML-based network intrusion detection systems (NIDS) can be evaded with adversarial perturbations. However, these attacks rely on complex optimizations that incur large computational overhead, making them impractical in many real-world settings. In this paper, we introduce a lightweight adversarial agent that executes evasion strategies (policies) trained via reinforcement learning (RL), without requiring online optimization. The attack proceeds in two phases: (1) offline training, where the agent learns to evade a surrogate ML model by perturbing malicious flows, using network traffic data assumed to be collected via reconnaissance; and (2) deployment, where the trained agent runs on an attacker-controlled compromised device and evades ML-based NIDS using the learned attack strategies. We evaluate our approach across diverse NIDS and several white-, gray-, and black-box threat models. We demonstrate that attacks using these lightweight agents can be highly effective (reaching up to a 48.9% attack success rate), extremely fast (requiring as little as 5.72 ms to craft an attack), and negligible in resource cost (e.g., 0.52 MB of memory). Through this work, we show that future botnets driven by lightweight learning-based agents can be highly effective and widely deployable across diverse environments of compromised devices.
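The two-phase attack described in the abstract (offline RL training against a surrogate model, then deployment of the frozen policy with no online optimization) can be sketched with a toy tabular Q-learning agent. Everything here is an illustrative assumption rather than the paper's actual setup: the surrogate is reduced to a single-feature threshold detector, states are discretized feature values, and actions are small feature perturbations.

```python
import random

# Hypothetical surrogate NIDS (assumption): flags a flow when one
# normalized feature (e.g., mean packet size) exceeds a threshold.
def surrogate_detects(feature: float) -> bool:
    return feature > 0.5

ACTIONS = [-0.1, 0.0, 0.1]  # feature perturbations the agent may apply

def train_agent(episodes: int = 2000, seed: int = 0) -> dict:
    """Offline phase: tabular Q-learning against the surrogate model."""
    rng = random.Random(seed)
    q = {}  # (discretized feature, action index) -> estimated value
    for _ in range(episodes):
        feature = rng.uniform(0.5, 1.0)  # start from a flagged malicious flow
        for _ in range(6):
            state = round(feature, 1)
            # epsilon-greedy exploration during training only
            if rng.random() < 0.2:
                action = rng.randrange(len(ACTIONS))
            else:
                action = max(range(len(ACTIONS)),
                             key=lambda i: q.get((state, i), 0.0))
            feature = min(1.0, max(0.0, feature + ACTIONS[action]))
            done = not surrogate_detects(feature)   # evasion succeeded
            reward = 1.0 if done else -0.1
            next_state = round(feature, 1)
            best_next = 0.0 if done else max(
                q.get((next_state, i), 0.0) for i in range(len(ACTIONS)))
            key = (state, action)
            q[key] = q.get(key, 0.0) + 0.5 * (
                reward + 0.9 * best_next - q.get(key, 0.0))
            if done:
                break
    return q

def evade(feature: float, q: dict, max_steps: int = 10) -> float:
    """Deployment phase: apply the frozen greedy policy, no optimization."""
    for _ in range(max_steps):
        if not surrogate_detects(feature):
            break
        state = round(feature, 1)
        action = max(range(len(ACTIONS)),
                     key=lambda i: q.get((state, i), 0.0))
        feature = min(1.0, max(0.0, feature + ACTIONS[action]))
    return feature
```

Deployment is just dictionary lookups and additions per step, which is the point of the abstract's cost figures: once training is done offline, crafting a perturbation requires no gradient queries or solver calls on the compromised device.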