🤖 AI Summary
This work investigates grey-box evasion attacks against graph neural network (GNN)-based anomaly detection systems in sensor networks under strict budget constraints: the attacker may perturb only a small number of non-target nodes, aiming either to suppress a true anomaly or to induce a false alarm at the target node. The proposed adversarial perturbation method is grounded in gradient sensitivity analysis and feature-space optimization: it identifies the neighboring sensors most influential to the target node's classification and injects precisely crafted perturbations within tight budget and stealthiness constraints. The approach integrates multivariate time-series modeling with GNN interpretability analysis. Evaluated on three real-world datasets, it reduces the accuracy of state-of-the-art GNN-based detectors by 30.62%–39.16% on average, significantly outperforming existing baselines and demonstrating both effectiveness and practical relevance.
📝 Abstract
Graph Neural Networks (GNNs) have emerged as powerful models for anomaly detection in sensor networks, particularly when analyzing multivariate time series. In this work, we introduce BETA, a novel grey-box evasion attack targeting such GNN-based detectors, in which the attacker is constrained to perturbing sensor readings from a limited set of nodes, excluding the target sensor, with the goal of either suppressing a true anomaly or triggering a false alarm at the target node. BETA identifies the sensors most influential to the target node's classification and injects carefully crafted adversarial perturbations into their features, all while maintaining stealth and respecting the attacker's budget. Experiments on three real-world sensor network datasets show that BETA reduces the detection accuracy of state-of-the-art GNN-based detectors by 30.62% to 39.16% on average, significantly outperforming baseline attack strategies while operating within realistic constraints.
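The core idea described above, ranking non-target sensors by the gradient of the target's anomaly score and perturbing only the top few within an L∞ budget, can be illustrated with a toy sketch. This is *not* the paper's BETA implementation: the one-layer GCN-style detector, the uniform adjacency, the FGSM-style step, and all variable names (`A`, `X`, `w`, `budget`, `eps`) are simplifying assumptions chosen so the gradient has a closed form.

```python
import numpy as np

# Hedged sketch (not the actual BETA method): a toy one-layer
# GCN-style detector whose anomaly score for the target node is
# s = (A @ X @ w)[target], so the gradient of s w.r.t. each
# node's feature row X[j] is simply A[target, j] * w.

rng = np.random.default_rng(0)

n_nodes, n_feats = 6, 4
A = np.ones((n_nodes, n_nodes)) / n_nodes   # toy normalized adjacency (assumed)
X = rng.normal(size=(n_nodes, n_feats))     # sensor feature matrix
w = rng.normal(size=n_feats)                # detector weights (grey-box: assumed known)
target = 0

def target_score(X):
    """Anomaly score of the target node under the toy detector."""
    return (A @ X @ w)[target]

# Gradient sensitivity: norm of d(score)/d(X[j]) per node.
grads = np.outer(A[target], w)              # shape (n_nodes, n_feats)
sensitivity = np.linalg.norm(grads, axis=1)
sensitivity[target] = -np.inf               # the target itself may not be perturbed

budget, eps = 2, 0.1                        # perturb at most 2 nodes, L_inf <= eps
chosen = np.argsort(sensitivity)[-budget:]  # most influential non-target sensors

# FGSM-style step: nudge chosen nodes' features to *lower* the
# target's anomaly score (i.e., suppress a true anomaly).
X_adv = X.copy()
X_adv[chosen] -= eps * np.sign(grads[chosen])

assert target_score(X_adv) < target_score(X)
```

The same selection step could instead raise the score to trigger a false alarm by flipping the sign of the update; the budget constraint is enforced by `chosen`'s size, and stealth by the `eps` bound on each perturbation.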