AI Summary
Industrial Control Systems (ICS) face significant security challenges because their machine learning-based Intrusion Detection Systems (IDS) are vulnerable to adversarial attacks and generalize poorly. To address this, this paper proposes an adversarial sample generation and robustness enhancement framework tailored for ICS environments. We adapt the Jacobian-based Saliency Map Attack (JSMA) to the ICS domain for the first time and generate high-generalizability adversarial examples on the SWaT testbed, empirically demonstrating cross-attack-type transferability and scalability against diverse real-world ICS attacks. Integrated within an adversarial training paradigm, our approach achieves 95% detection accuracy on previously unseen real attack data, substantially improving IDS robustness against stealthy adversarial perturbations. Key contributions include: (1) the first ICS-adapted JSMA formulation; (2) empirical validation of adversarial sample generalization in a realistic industrial control setting; and (3) a systematic methodology for enhancing model resilience against adversarial threats.
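The adversarial training paradigm mentioned above can be sketched as a data-augmentation step: adversarial variants of attack samples keep their "attack" label and are appended to the training set before the IDS is retrained. The following is a minimal illustration only; the function names, feature values, and the one-feature random perturbation are hypothetical stand-ins, not the paper's actual SWaT pipeline or its JSMA perturbation.

```python
import random

def perturb(x, eps=0.05, rng=random):
    # Stand-in for a JSMA-style perturbation: shift one feature slightly.
    x = list(x)
    i = rng.randrange(len(x))
    x[i] += rng.choice([-eps, eps])
    return x

def augment_with_adversarial(train_set):
    """train_set: list of (features, label) pairs, label 1 = attack.
    Returns the set extended with one perturbed copy of every attack
    sample, still labeled as an attack, so the retrained IDS learns
    to flag evasive variants as well."""
    augmented = list(train_set)
    for x, y in train_set:
        if y == 1:
            augmented.append((perturb(x), 1))
    return augmented

clean = [([0.9, 0.1], 1), ([0.2, 0.8], 0)]   # hypothetical samples
aug = augment_with_adversarial(clean)
print(len(aug))  # 3: one adversarial copy added per attack sample
```

The key design point is that the adversarial copies inherit the attack label: the model is explicitly taught that perturbed attacks are still attacks.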
Abstract
Machine learning (ML)-based intrusion detection systems (IDS) are vulnerable to adversarial attacks, so it is crucial for an IDS to learn to recognize adversarial examples before malicious entities exploit them. In this paper, we generate adversarial samples using the Jacobian-based Saliency Map Attack (JSMA) and validate their generalization and scalability against a broad range of real attacks on Industrial Control Systems (ICS). We evaluate the impact of the generated samples across multiple attack scenarios. A model trained with these adversarial samples detects attacks with 95% accuracy on real-world attack data withheld from training. The study was conducted on an operational Secure Water Treatment (SWaT) testbed.
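As a rough illustration of the JSMA idea (not the paper's ICS-adapted formulation), the sketch below greedily perturbs the single most salient feature of a toy logistic "IDS" until the sample is scored as benign. The model, its weights, the sample values, and the perturbation step `theta` are all hypothetical; a real JSMA uses the Jacobian of a trained network's class scores rather than this hand-set linear model.

```python
import math

# Toy logistic "IDS": P(attack | x) = sigmoid(w . x + b). Weights are
# hypothetical placeholders, not a trained SWaT model.
W = [1.5, -2.0, 0.8, 0.5]
B = -0.2

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_attack_prob(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

def jsma_perturb(x, theta=0.1, max_iters=20, clip=(0.0, 1.0)):
    """Greedy JSMA-style evasion: at each step, perturb the single
    feature with the largest-magnitude gradient of P(attack) in the
    direction that lowers it, until the model scores the sample benign."""
    x = list(x)
    for _ in range(max_iters):
        p = predict_attack_prob(x)
        if p < 0.5:  # model now says benign: evasion succeeded
            break
        # For a logistic model, dP/dx_i = p * (1 - p) * w_i.
        grad = [p * (1.0 - p) * wi for wi in W]
        # Skip features already saturated in the useful direction.
        candidates = [j for j in range(len(x))
                      if (grad[j] > 0 and x[j] > clip[0])
                      or (grad[j] < 0 and x[j] < clip[1])]
        if not candidates:
            break
        i = max(candidates, key=lambda j: abs(grad[j]))
        x[i] -= theta * (1 if grad[i] > 0 else -1)
        x[i] = min(max(x[i], clip[0]), clip[1])
    return x

x_attack = [0.9, 0.1, 0.8, 0.7]   # hypothetical attack-like sample
x_adv = jsma_perturb(x_attack)
print(predict_attack_prob(x_attack) > 0.5)  # True: flagged as attack
```

Because JSMA changes only a few features per sample, the resulting perturbations stay small and stealthy, which is what makes the adversarial training in this work effective.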