🤖 AI Summary
To address the vulnerability of AI-driven intrusion detection systems (IDS) in IoT environments to adversarial attacks, this paper proposes a defense model based on SHAP attribution fingerprints. The method introduces, for the first time, fine-grained feature attribution values, generated with SHAP's DeepExplainer, as "attribution fingerprints" that capture intrinsic differences in feature-contribution patterns between benign and adversarial network traffic, enabling high-accuracy identification of adversarial samples. By jointly optimizing robustness and interpretability, the model outperforms state-of-the-art approaches on a standard IoT benchmark dataset: it substantially improves adversarial detection rates while enhancing decision transparency and security trustworthiness. The attribution-fingerprint framework provides both actionable insight into model behavior and a principled, explainable mechanism for adversarial resilience in resource-constrained IoT settings.
📝 Abstract
The rapid proliferation of Internet of Things (IoT) devices has transformed numerous industries by enabling seamless connectivity and data-driven automation. However, this expansion has also exposed IoT networks to increasingly sophisticated security threats, including adversarial attacks targeting artificial intelligence (AI) and machine learning (ML)-based intrusion detection systems (IDS) to deliberately evade detection, induce misclassification, and systematically undermine the reliability and integrity of security defenses. To address these challenges, we propose a novel adversarial detection model that enhances the robustness of IoT IDS against adversarial attacks through SHapley Additive exPlanations (SHAP)-based fingerprinting. Using SHAP's DeepExplainer, we extract attribution fingerprints from network traffic features, enabling the IDS to reliably distinguish between clean and adversarially perturbed inputs. By capturing subtle attribution patterns, the model becomes more resilient to evasion attempts and adversarial manipulations. We evaluated the model on a standard IoT benchmark dataset, where it significantly outperformed a state-of-the-art method in detecting adversarial attacks. In addition to enhanced robustness, this approach improves model transparency and interpretability, thereby increasing trust in the IDS through explainable AI.
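The core idea, distinguishing clean from adversarial inputs by the *pattern* of their feature attributions rather than by the raw features, can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: in the paper the attribution vectors come from SHAP's `DeepExplainer` applied to a trained IDS model, whereas here they are simulated with NumPy so the example runs standalone; the normalization, reference fingerprint, and cosine-distance threshold are assumptions chosen to make the mechanism concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for SHAP DeepExplainer output: per-sample feature-attribution
# vectors. In the paper these would come from something like
# shap.DeepExplainer(model, background).shap_values(X); here we simulate
# two populations whose attribution *patterns* differ.
benign_attr = rng.normal(loc=[1.0, 0.2, -0.5, 0.0], scale=0.1, size=(200, 4))
adv_attr = rng.normal(loc=[0.1, 0.9, 0.4, -0.6], scale=0.1, size=(50, 4))

def fingerprint(attr):
    """L2-normalize attribution vectors so only the pattern of feature
    contributions matters, not its overall magnitude."""
    return attr / (np.linalg.norm(attr, axis=1, keepdims=True) + 1e-12)

# Reference fingerprint estimated from clean traffic only.
ref = fingerprint(benign_attr).mean(axis=0)
ref /= np.linalg.norm(ref)

def adversarial_score(attr):
    """Cosine distance of each sample's fingerprint from the benign
    reference; a large distance suggests an adversarial input."""
    return 1.0 - fingerprint(attr) @ ref

# Threshold set from clean data (99th percentile of benign scores).
tau = np.quantile(adversarial_score(benign_attr), 0.99)
flags = adversarial_score(adv_attr) > tau
print(f"flagged {flags.mean():.0%} of adversarial samples")
```

In this toy setup the adversarial attributions point in a clearly different direction, so nearly all are flagged; in practice the separation depends on how strongly the attack distorts the model's attribution patterns, which is precisely what the paper's evaluation measures.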