Quantifying the Noise of Structural Perturbations on Graph Adversarial Attacks

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph neural network (GNN) adversarial attacks lack interpretable, quantitative characterization of individual perturbation strength, resulting in opaque, black-box attack processes. Method: This paper introduces the novel concept of “structural noise,” formally defining the adversarial edge’s attack strength as its perturbation effect on the node classification margin. Based on this, we propose a noise-driven, interpretable attack framework. Our method designs single-step and multi-step attack strategies grounded in the interplay between structural noise and classification margin, uncovering topological principles—namely, that high-centrality and low-homophily nodes are more susceptible to critical perturbations. Contribution/Results: Extensive experiments across multiple benchmark datasets and three mainstream GNN architectures demonstrate that our framework significantly improves attack effectiveness while providing theoretically grounded, interpretable guidance for perturbation selection. It offers a new perspective for analyzing GNN robustness and informs principled defense design.
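The margin-based notion of structural noise described above can be sketched in code. This is a minimal illustration under assumptions, not the paper's implementation: the one-layer linear GCN propagation, the function names (`structural_noise`, `margin`), and the edge-flip formulation are all hypothetical stand-ins for the paper's exact definitions.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def margin(logits, y):
    # Classification margin of one node: true-class score minus best rival score
    true_score = logits[y]
    rival_score = np.max(np.delete(logits, y))
    return true_score - rival_score

def structural_noise(A, X, W, v, y_v, edge):
    # Hypothetical "structural noise" of a candidate adversarial edge:
    # the drop in node v's classification margin after flipping that edge,
    # under a simple one-layer linear GCN (logits = norm(A) X W).
    logits_v = (normalize_adj(A) @ X @ W)[v]
    A_pert = A.copy()
    i, j = edge
    A_pert[i, j] = A_pert[j, i] = 1 - A_pert[i, j]  # flip: add or remove edge
    logits_v_pert = (normalize_adj(A_pert) @ X @ W)[v]
    return margin(logits_v, y_v) - margin(logits_v_pert, y_v)
```

A larger noise value means the edge flip erodes the target node's margin more, which is the quantity the summary suggests drives perturbation selection.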

📝 Abstract
Graph neural networks have been widely used to solve graph-related tasks because of their strong ability to exploit the local information of neighbors. However, recent studies on graph adversarial attacks have shown that current graph neural networks are not robust against malicious attacks. Much of the existing work focuses on optimization objectives based on attack performance to obtain (near-)optimal perturbations, but pays less attention to quantifying the strength of each perturbation, such as the injection of a particular node/link, which makes the choice of perturbations a black-box process that lacks interpretability. In this work, we propose the concept of noise to quantify the attack strength of each adversarial link. Furthermore, we propose three attack strategies based on the defined noise and classification margins, covering both single-step and multi-step optimization. Extensive experiments conducted on benchmark datasets against three representative graph neural networks demonstrate the effectiveness of the proposed attack strategies. In particular, we also investigate the preferred patterns of effective adversarial perturbations by analyzing the properties of the selected perturbation nodes.
Problem

Research questions and friction points this paper is trying to address.

Quantify attack strength of adversarial links on graphs
Develop interpretable perturbation strategies for graph attacks
Analyze preferred patterns of effective adversarial perturbations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantify attack strength using noise concept
Develop single and multi-step attack strategies
Analyze perturbation patterns for interpretability
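The single- and multi-step strategies listed above suggest a greedy loop: at each step, score every candidate edge by its structural noise and flip the highest-scoring one until the budget is spent. A minimal sketch under assumptions: the function name `greedy_noise_attack`, the callable `noise_fn` interface, and the restriction of candidates to edges incident to the target node are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def greedy_noise_attack(A, X, W, v, y_v, budget, noise_fn):
    # Hypothetical multi-step strategy: at each step, flip the candidate edge
    # with the largest noise score (e.g., induced margin drop), within budget.
    # noise_fn(A, X, W, v, y_v, edge) -> float is supplied by the caller.
    A = A.copy()
    n = A.shape[0]
    chosen = []
    for _ in range(budget):
        # Candidate perturbations: edges incident to the target node v
        candidates = [(v, u) for u in range(n) if u != v]
        best = max(candidates, key=lambda e: noise_fn(A, X, W, v, y_v, e))
        i, j = best
        A[i, j] = A[j, i] = 1 - A[i, j]  # flip the selected edge
        chosen.append(best)
    return A, chosen
```

Because the scores are recomputed on the perturbed graph at every step, the multi-step variant can account for interactions between perturbations, which a single-step (one-shot) selection ignores.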
Junyuan Fang
Aalto University
Han Yang
School of Computer Science and Engineering, Sun Yat-sen University, China
Haixian Wen
School of Computer Science and Engineering, Sun Yat-sen University, China
Jiajing Wu
Professor, Sun Yat-sen University
Blockchain, Complex Networks, Software Engineering
Zibin Zheng
IEEE Fellow, Highly Cited Researcher, Sun Yat-sen University, China
Blockchain, Smart Contract, Services Computing, Software Reliability
Chi K. Tse
Department of Electrical Engineering, City University of Hong Kong, China