🤖 AI Summary
Graph Convolutional Networks (GCNs) exhibit insufficient robustness against small perturbations in node features, limiting their deployment in safety-critical applications. This paper introduces polyhedral abstract interpretation—the first such approach—for certifying GCN robustness. To address the intrinsic coupling between graph structure and node features, we design a tight, differentiable polyhedral propagation mechanism that integrates interval arithmetic with explicit modeling of graph topological constraints, yielding theoretically sound certified upper and lower bounds for node classification. Our method balances certification tightness and computational efficiency, enabling end-to-end differentiable certification and robustness-aware training. Experiments demonstrate that our framework significantly improves both certified accuracy and certification speed, outperforming state-of-the-art abstract interpretation and randomized smoothing methods across multiple benchmark datasets.
📝 Abstract
Graph convolutional neural networks (GCNs) are powerful tools for learning graph-based knowledge representations from training data. However, they are vulnerable to small perturbations in the input graph, which makes them susceptible to input faults or adversarial attacks. This poses a significant problem for GCNs intended to be used in critical applications, which need to provide certifiably robust services even in the presence of adversarial perturbations. We propose an improved GCN robustness certification technique for node classification in the presence of node feature perturbations. We introduce a novel polyhedra-based abstract interpretation approach to tackle specific challenges of graph data and provide tight upper and lower bounds for the robustness of the GCN. Experiments show that our approach simultaneously improves the tightness of robustness bounds as well as the runtime performance of certification. Moreover, our method can be used during training to further improve the robustness of GCNs.