Deterministic Certification of Graph Neural Networks against Graph Poisoning Attacks with Arbitrary Perturbations

πŸ“… 2025-03-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Graph neural networks (GNNs) are highly vulnerable during training to graph poisoning attacks that arbitrarily combine edge, node, and node-feature perturbations. Existing defenses either lack theoretical guarantees or are restricted to a single perturbation type, a specific architecture, or a specific task, and their robustness certificates are not fully accurate. This paper proposes PGNNCert, the first architecture-agnostic GNN defense framework that provides provably robust certification against arbitrary joint perturbations (edges/nodes/features) with 100% deterministic certification accuracy. Its core innovation is integrating deterministic interval propagation with robust aggregation pruning, grounded in graph-structure sensitivity analysis and bounded feature-propagation theory, which yields a general, formally verifiable certification mechanism. Extensive evaluation on multiple node- and graph-classification benchmarks demonstrates that PGNNCert significantly outperforms state-of-the-art certification methods that are limited to single-perturbation settings.

πŸ“ Abstract
Graph neural networks (GNNs) are becoming the de facto method to learn on graph data and have achieved state-of-the-art performance on node and graph classification tasks. However, recent works show GNNs are vulnerable to training-time poisoning attacks -- marginally perturbing edges, nodes, and/or node features of training graph(s) can largely degrade GNNs' testing performance. Most previous defenses against graph poisoning attacks are empirical and are soon broken by adaptive or stronger attacks. A few provable defenses offer robustness guarantees, but have large gaps when applied in practice: 1) they restrict the attacker to only one type of perturbation; 2) they are designed for a particular GNN architecture or task; and 3) their robustness guarantees are not 100% accurate. In this work, we bridge all these gaps by developing PGNNCert, the first certified defense of GNNs against poisoning attacks under arbitrary (edge, node, and node feature) perturbations with deterministic robustness guarantees. Extensive evaluations on multiple node and graph classification datasets and GNNs demonstrate the effectiveness of PGNNCert in provably defending against arbitrary poisoning perturbations. PGNNCert is also shown to significantly outperform the state-of-the-art certified defenses against edge perturbation or node perturbation during GNN training.
Problem

Research questions and friction points this paper is trying to address.

Certify GNN robustness against arbitrary poisoning attacks
Address gaps in existing provable defense methods
Defend against edge, node, and feature perturbations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Certified defense against arbitrary graph perturbations
Deterministic robustness guarantees for GNNs
Outperforms state-of-the-art certified defenses
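The summary and bullets above state what PGNNCert certifies, but this listing does not spell out how a deterministic poisoning certificate is computed. As a hedged illustration only, and not PGNNCert's actual algorithm, the sketch below shows the standard partition-and-vote route to deterministic certificates: training elements are hashed into disjoint partitions, one sub-model is trained per partition and casts a vote, and the gap between the top two vote counts bounds how much poisoning the final prediction can tolerate. All function names here are hypothetical.

```python
import hashlib
from collections import Counter

def partition_of(item_key: str, num_parts: int) -> int:
    # Deterministic hash partition: each training element (e.g. an edge,
    # node, or feature row, serialized as a string key) lands in exactly
    # one partition, so one poisoned element can corrupt at most one
    # sub-model's vote.
    digest = hashlib.sha256(item_key.encode()).hexdigest()
    return int(digest, 16) % num_parts

def certified_prediction(votes):
    # votes: one predicted label per sub-model. In practice each sub-model
    # would be a GNN trained only on its own partition (hypothetical setup).
    counts = Counter(votes)
    ranked = counts.most_common()
    top_label, n_top = ranked[0]
    n_second = ranked[1][1] if len(ranked) > 1 else 0
    # The majority label survives as long as r poisoned partitions can flip
    # at most r votes away from it and add r votes to the runner-up:
    #   n_top - r > n_second + r  =>  r <= (n_top - n_second - 1) // 2.
    # (Simplified: real certificates also fix a tie-breaking rule.)
    certified_radius = max((n_top - n_second - 1) // 2, 0)
    return top_label, certified_radius

# Example: 10 sub-models vote on a test node's label.
label, radius = certified_prediction(["A"] * 7 + ["B"] * 2 + ["C"])
# Any poisoning confined to at most `radius` partitions cannot change `label`.
```

The certificate is deterministic (no sampling, so 100% certification accuracy in the sense used above) because both the partition assignment and the vote-gap bound are exact computations rather than probabilistic estimates.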
πŸ”Ž Similar Papers
No similar papers found.
Jiate Li
University of Southern California
Meng Pang
School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China
Yun Dong
Department of Humanities, Social Science, and Communication, MSOE, Milwaukee, USA
Binghui Wang
Assistant Professor, Illinois Institute of Technology
Trustworthy Machine Learning · Machine Learning · Data Science