🤖 AI Summary
Graph neural networks (GNNs) are vulnerable during training to graph poisoning attacks involving arbitrary combinations of edge, node, and node-feature perturbations. Existing defenses either lack theoretical guarantees or are restricted to single perturbation types, specific architectures, or tasks, or offer robustness guarantees that are not fully deterministic. This paper proposes PGNNCert, the first certified GNN defense that provides provably robust guarantees against arbitrary joint perturbations (edges, nodes, and features) with deterministic, i.e., 100%-certain, certification, without being tied to a particular GNN architecture or task. Extensive evaluation on multiple node- and graph-classification benchmarks and GNN architectures demonstrates that PGNNCert provably defends against arbitrary poisoning perturbations and significantly outperforms state-of-the-art certified defenses limited to single-perturbation settings.
📄 Abstract
Graph neural networks (GNNs) are becoming the de facto method to learn on graph data and have achieved state-of-the-art performance on node and graph classification tasks. However, recent works show GNNs are vulnerable to training-time poisoning attacks: marginally perturbing edges, nodes, and/or node features of training graph(s) can largely degrade GNNs' testing performance. Most previous defenses against graph poisoning attacks are empirical and are soon broken by adaptive or stronger attacks. A few provable defenses provide robustness guarantees, but exhibit large gaps when applied in practice: 1) they restrict the attacker to only one type of perturbation; 2) they are designed for a particular GNN architecture or task; and 3) their robustness guarantees are not 100% accurate. In this work, we bridge all these gaps by developing PGNNCert, the first certified defense of GNNs against poisoning attacks under arbitrary (edge, node, and node feature) perturbations with deterministic robustness guarantees. Extensive evaluations on multiple node and graph classification datasets and GNNs demonstrate the effectiveness of PGNNCert in provably defending against arbitrary poisoning perturbations. PGNNCert is also shown to significantly outperform the state-of-the-art certified defenses against edge perturbation or node perturbation during GNN training.
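To make the notion of a deterministic robustness guarantee concrete, the following is a minimal, illustrative sketch of the voting-based certification paradigm commonly used by deterministic certified defenses. It is not PGNNCert's exact algorithm; the assumption (stated in the comments) is that the training data is partitioned so each poisoned element can corrupt at most one sub-classifier's vote.

```python
# Illustrative sketch of deterministic voting-based certification
# (an assumption-laden simplification, not the paper's algorithm).
# Assumption: each poisoned element (edge, node, or feature) affects
# at most one sub-classifier trained on a disjoint data partition.
from collections import Counter

def certify_by_voting(votes, num_perturbed):
    """votes: predicted labels, one per sub-classifier.
    num_perturbed: number of poisoned elements to certify against.
    Returns (majority_label, is_certified)."""
    ranked = Counter(votes).most_common()
    top_label, top_count = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    # Each perturbed element flips at most one vote: it can remove one
    # vote from the top class and add one to the runner-up, shrinking
    # the gap by at most 2. The prediction is certified (deterministically,
    # no randomness involved) iff the gap survives the worst case.
    certified = (top_count - runner_up) > 2 * num_perturbed
    return top_label, certified

label, ok = certify_by_voting(['a'] * 7 + ['b'] * 2, num_perturbed=2)
# gap = 7 - 2 = 5 > 2 * 2, so the prediction 'a' is certified
```

Because the bound is a simple counting argument rather than a probabilistic one, the guarantee holds with certainty, which is what "deterministic" means in this context.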