On Measuring Unnoticeability of Graph Adversarial Attacks: Observations, New Measure, and Applications

📅 2025-01-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing measures of graph adversarial attack noticeability face two key bottlenecks: hand-crafted detection rules are easily evaded by attackers, and global statistical features lack sensitivity to local, subtle perturbations, which lets attacks remain highly imperceptible. To address this, the authors propose HideNSeek, the first end-to-end learnable noticeability measure for graph attacks. Its core components are: (1) a Learnable Edge Scorer (LEO) that scores each edge on its likelihood of being adversarial, enabling fine-grained identification of potential attack edges; (2) an imbalance-aware aggregation mechanism that combines the per-edge scores into a single noticeability score without letting the rare attack edges be drowned out by benign ones; and (3) an application to defense, where the learned scores are used to filter suspicious edges. Extensive experiments across six real-world graphs and five attack types demonstrate that LEO consistently outperforms 11 baseline methods, and that the learned edge-filtering strategy significantly enhances the robustness of Graph Neural Networks (GNNs) under adversarial perturbations.

📝 Abstract
Adversarial attacks are allegedly unnoticeable. Prior studies have designed attack noticeability measures on graphs, primarily using statistical tests to compare the topology of original and (possibly) attacked graphs. However, we observe two critical limitations in the existing measures. First, because the measures rely on simple rules, attackers can readily enhance their attacks to bypass them, reducing their attack "noticeability" and, yet, maintaining their attack performance. Second, because the measures naively leverage global statistics, such as degree distributions, they may entirely overlook attacks until severe perturbations occur, letting the attacks be almost "totally unnoticeable." To address the limitations, we introduce HideNSeek, a learnable measure for graph attack noticeability. First, to mitigate the bypass problem, HideNSeek learns to distinguish the original and (potential) attack edges using a learnable edge scorer (LEO), which scores each edge on its likelihood of being an attack. Second, to mitigate the overlooking problem, HideNSeek conducts imbalance-aware aggregation of all the edge scores to obtain the final noticeability score. Using six real-world graphs, we empirically demonstrate that HideNSeek effectively alleviates the observed limitations, and LEO (i.e., our learnable edge scorer) outperforms eleven competitors in distinguishing attack edges under five different attack methods. For an additional application, we show that LEO boosts the performance of robust GNNs by removing attack-like edges.
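The abstract's second point, that global statistics dilute rare, local perturbations, can be illustrated with a toy sketch. The function below is a hypothetical stand-in for imbalance-aware aggregation (the paper does not specify this exact rule): instead of averaging all per-edge scores, it focuses on the most suspicious ones, so a handful of high-scoring attack edges is not drowned out by thousands of benign edges.

```python
# Hypothetical illustration, NOT the authors' implementation:
# aggregate per-edge attack-likelihood scores in an imbalance-aware
# way by averaging only the top-k most suspicious edges.

def aggregate_noticeability(edge_scores, top_k=5):
    """Aggregate per-edge scores (floats in [0, 1]) into one
    noticeability value by averaging the top_k highest scores."""
    top = sorted(edge_scores, reverse=True)[:top_k]
    return sum(top) / len(top)

# 1000 benign edges with low scores plus 5 injected attack edges.
scores = [0.05] * 1000 + [0.95] * 5

naive_mean = sum(scores) / len(scores)     # ~0.054: attack is invisible
focused = aggregate_noticeability(scores)  # 0.95: attack stands out
```

A plain mean barely moves when five attack edges are added, mirroring the "overlooking problem" the abstract describes, while the imbalance-aware variant makes the perturbation plainly visible.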
Problem

Research questions and friction points this paper is trying to address.

Adversarial Attacks
Graph Detection Methods
Statistical Features Dependence
Innovation

Methods, ideas, or system contributions that make the work stand out.

HideNSeek
Learnable Edge Scorer (LEO)
Imbalance-aware Aggregation