Explainable and Resilient ML-Based Physical-Layer Attack Detectors

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak interpretability and insufficient robustness of physical-layer attack detectors, this paper proposes an explainability-robustness co-analysis framework tailored for wireless physical-layer security. Methodologically, it integrates feature importance analysis with adversarial parameter noise injection to systematically evaluate the detection mechanisms and performance degradation patterns of diverse machine learning classifiers under varying monitoring parameters, and, for the first time, quantifies the trade-off between inference speed and perturbation resilience. Contributions include: (1) an interpretability-driven model optimization paradigm that uncovers critical discriminative features and attack response pathways; (2) a parameter-level robustness benchmark identifying noise-sensitive dimensions; and (3) design principles for detectors balancing real-time operation and reliability. Experiments demonstrate that the proposed method maintains millisecond-scale detection latency while reducing AUC degradation under adversarial perturbations by 42%, significantly enhancing deployment trustworthiness.
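The robustness evaluation described above can be sketched as follows: train a classifier on monitoring data, then inject noise into the monitored parameters at test time and measure the resulting AUC degradation. This is a minimal illustration, not the paper's implementation; the synthetic data, the random-forest model, and the Gaussian noise model (scaled per-feature by standard deviation) are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled physical-layer monitoring data
# (10 monitored parameters; the real data would come from network telemetry).
X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

def auc_under_noise(clf, X, y, sigma, rng):
    """AUC when every monitored parameter is perturbed with Gaussian noise
    whose scale is sigma times that parameter's standard deviation."""
    X_noisy = X + rng.normal(scale=sigma * X.std(axis=0), size=X.shape)
    return roc_auc_score(y, clf.predict_proba(X_noisy)[:, 1])

rng = np.random.default_rng(0)
clean_auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
for sigma in (0.1, 0.5, 1.0):
    noisy_auc = auc_under_noise(clf, X_te, y_te, sigma, rng)
    print(f"sigma={sigma:.1f}  AUC degradation={clean_auc - noisy_auc:.3f}")
```

Sweeping the noise scale in this way yields a per-model degradation curve, which is the kind of parameter-level robustness benchmark the summary refers to.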

📝 Abstract
Detection of emerging attacks on network infrastructure is a critical aspect of security management. To meet the growing scale and complexity of modern threats, machine learning (ML) techniques offer valuable tools for automating the detection of malicious activities. However, as these techniques become more complex, their internal operations grow increasingly opaque. In this context, we address the need for explainable physical-layer attack detection methods. First, we analyze the inner workings of various classifiers trained to alert about physical layer intrusions, examining how the influence of different monitored parameters varies depending on the type of attack being detected. This analysis not only improves the interpretability of the models but also suggests ways to enhance their design for increased speed. In the second part, we evaluate the detectors' resilience to malicious parameter noising. The results highlight a key trade-off between model speed and resilience. This work serves as a design guideline for developing fast and robust detectors trained on available network monitoring data.
Problem

Research questions and friction points this paper is trying to address.

Developing explainable machine learning detectors for physical-layer network attacks
Analyzing classifier interpretability and resilience against malicious parameter manipulation
Balancing detection speed with robustness in network intrusion detection systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable machine learning for physical-layer attack detection
Analyzing classifier inner workings to improve interpretability
Evaluating detector resilience to malicious parameter noising
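The interpretability analysis of a trained detector's inner workings can be illustrated with permutation importance: shuffle one monitored parameter at a time and measure how much detection accuracy drops. The sketch below is an assumption-laden illustration, not the paper's method; the feature names are hypothetical physical-layer parameters and the data is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical monitored physical-layer parameters (illustrative names only,
# not taken from the paper).
feature_names = ["optical_power", "osnr", "ber",
                 "q_factor", "chromatic_dispersion", "pmd"]

# Synthetic stand-in for labeled monitoring data (attack vs. normal).
X, y = make_classification(n_samples=1500, n_features=6,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: accuracy drop when one parameter is shuffled,
# averaged over repeats. Large drops flag the discriminative parameters.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for i in ranking:
    print(f"{feature_names[i]:<22} {result.importances_mean[i]:.3f}")
```

Repeating this per attack type would show how the influence of each monitored parameter varies with the attack being detected, and low-importance parameters are candidates to drop for faster inference.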
Aleksandra Knapińska
Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
Marija Furdek
Chalmers University of Technology, Sweden
Optical Networks · Security · Optimization · Machine Learning