Understanding the Impact of Graph Reduction on Adversarial Robustness in Graph Neural Networks

📅 2024-12-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically investigates how graph reduction techniques, specifically sparsification and coarsening, affect the adversarial robustness of Graph Neural Networks (GNNs). Using benchmark datasets (e.g., Cora, Citeseer) and models (e.g., GCN, GAT), we evaluate four sparsification and six coarsening methods under the Mettack and PGD poisoning attacks. We make the first observation that graph sparsification significantly mitigates Mettack (reducing attack success rate by up to 37%) yet offers little protection against PGD; in contrast, coarsening consistently degrades robustness, causing an average 22.6% accuracy drop at a 50% reduction ratio. To explain these phenomena, we propose a novel attribution analysis framework revealing that defense failure arises from the combined effects of structural simplification and feature distortion. Our findings demonstrate that while graph reduction improves scalability, it may implicitly compromise robustness, providing critical empirical evidence for security-aware graph compression design.

📝 Abstract
As Graph Neural Networks (GNNs) become increasingly popular for learning from large-scale graph data across various domains, their susceptibility to adversarial attacks when using graph reduction techniques for scalability remains underexplored. In this paper, we present an extensive empirical study to investigate the impact of graph reduction techniques, specifically graph coarsening and sparsification, on the robustness of GNNs against adversarial attacks. Through extensive experiments involving multiple datasets and GNN architectures, we examine the effects of four sparsification and six coarsening methods on poisoning attacks. Our results indicate that, while graph sparsification can mitigate the effectiveness of certain poisoning attacks, such as Mettack, it has limited impact on others, like PGD. Conversely, graph coarsening tends to amplify the adversarial impact, significantly reducing classification accuracy as the reduction ratio decreases. Additionally, we provide a novel analysis of the causes driving these effects and examine how defensive GNN models perform under graph reduction, offering practical insights for designing robust GNNs within graph acceleration systems.
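To make the two reduction families concrete, here is a minimal Python sketch of edge sparsification (dropping edges) and partition-based coarsening (merging nodes into supernodes) on a toy edge list. The function names, the uniform random-sampling strategy, and the fixed partition are illustrative assumptions for exposition; they are not the specific sparsification or coarsening methods benchmarked in the paper.

```python
import random

def sparsify_edges(edges, keep_ratio, seed=0):
    """Sparsification sketch: uniformly sample a fraction of edges to keep.
    (Real methods weight edges, e.g., by effective resistance or similarity.)"""
    rng = random.Random(seed)
    k = max(1, int(len(edges) * keep_ratio))
    return rng.sample(edges, k)

def coarsen(edges, partition):
    """Coarsening sketch: contract each cluster in `partition` (node -> cluster id)
    into a supernode, dropping intra-cluster edges (self-loops) and duplicates."""
    coarse = {tuple(sorted((partition[u], partition[v])))
              for u, v in edges if partition[u] != partition[v]}
    return sorted(coarse)

# Toy graph: a 4-node cycle plus one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

sparse = sparsify_edges(edges, keep_ratio=0.6)
print(len(sparse))  # 3 of the 5 edges survive at a 0.6 keep ratio

# Merge nodes {0,1} into supernode 0 and {2,3} into supernode 1.
print(coarsen(edges, {0: 0, 1: 0, 2: 1, 3: 1}))  # [(0, 1)]
```

The sketch highlights why the two families can affect attacks differently: sparsification only removes edges (possibly including poisoned ones), whereas coarsening merges nodes, so a perturbation on any member node can propagate to the whole supernode.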
Problem

Research questions and friction points this paper is trying to address.

Investigates GNN robustness under adversarial attacks with graph reduction
Analyzes impact of sparsification and coarsening on poisoning attack effectiveness
Explores defensive GNN performance in graph acceleration systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Investigates graph reduction impact on GNN robustness
Tests sparsification and coarsening against poisoning attacks
Analyzes causes and offers robust GNN design insights