🤖 AI Summary
This work addresses the robustness of multi-step message passing in Graph Convolutional Networks (GCNs) under joint uncertainty in node features and graph structure. We propose the first formal verification framework for GCNs that explicitly models and preserves the critical non-convex dependencies that existing methods discard. Our approach introduces a reachability analysis framework based on matrix polynomial zonotopes, enabling joint modeling of structural perturbations and feature uncertainties. Evaluated on Cora, Citeseer, and PubMed, the method achieves high precision and scalability, significantly improving both verification tightness and computational efficiency. It closes a fundamental theoretical and technical gap in formal verification for general GCNs, thereby providing rigorous safety guarantees for safety-critical graph learning applications.
📝 Abstract
Graph neural networks are becoming increasingly popular in machine learning due to their unique ability to process graph-structured data. They have also been applied in safety-critical environments where perturbations inherently occur. Because neural networks are prone to adversarial attacks, they must be formally verified before deployment in such environments. While there is existing research on the formal verification of neural networks, no prior work verifies the robustness of generic graph convolutional network architectures under uncertainty in both the node features and the graph structure over multiple message-passing steps. This work addresses this research gap by explicitly preserving the non-convex dependencies of all elements in the underlying computations through reachability analysis with (matrix) polynomial zonotopes. We demonstrate our approach on three popular benchmark datasets.
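The paper's method relies on matrix polynomial zonotopes, which preserve non-convex dependencies across message-passing steps. As a simplified, hypothetical illustration of the underlying idea of set propagation through one GCN layer `H' = ReLU(A_hat @ H @ W)`, the sketch below uses plain zonotopes with a standard DeepZ-style ReLU relaxation instead; all function names and the toy graph are assumptions, and plain zonotopes lose exactly the dependency information the paper's representation retains.

```python
import numpy as np

def gcn_linear(c, G, A_hat, W):
    """Exact image of the zonotope {c + sum_k eps_k G[k] : |eps_k| <= 1}
    under the linear part of a GCN layer, H -> A_hat @ H @ W."""
    c_out = A_hat @ c @ W
    G_out = np.stack([A_hat @ g @ W for g in G])
    return c_out, G_out

def relu_relax(c, G):
    """Sound elementwise ReLU over-approximation (DeepZ-style): entries whose
    interval bounds cross zero get a scaled slope plus one fresh error generator."""
    r = np.sum(np.abs(G), axis=0)            # per-entry radius
    l, u = c - r, c + r
    lam = np.where(u <= 0, 0.0,
          np.where(l >= 0, 1.0, u / np.maximum(u - l, 1e-12)))
    mu = np.where((l < 0) & (u > 0), -lam * l / 2.0, 0.0)
    c_out = lam * c + mu
    G_out = lam * G                          # scale every existing generator
    new = []
    for i, j in np.argwhere(mu > 0):         # one new generator per crossing entry
        g = np.zeros_like(c)
        g[i, j] = mu[i, j]
        new.append(g)
    if new:
        G_out = np.concatenate([G_out, np.stack(new)], axis=0)
    return c_out, G_out

# toy 3-node path graph: normalized adjacency with self-loops (an assumption)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) + np.eye(3)
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 2))
W = rng.standard_normal((2, 2))

# l_inf feature perturbation of radius 0.1: one generator per feature entry
eps = 0.1
G0 = eps * np.eye(X.size).reshape(X.size, *X.shape)
c, G = relu_relax(*gcn_linear(X, G0, A_hat, W))

# soundness check: the unperturbed output must lie inside the output bounds
nominal = np.maximum(A_hat @ X @ W, 0.0)
radius = np.sum(np.abs(G), axis=0)
assert np.all(nominal >= c - radius - 1e-9)
assert np.all(nominal <= c + radius + 1e-9)
```

Linear maps act exactly on zonotopes, so only the ReLU introduces over-approximation here; the paper's polynomial zonotopes additionally track how structural and feature uncertainties interact nonlinearly, which this sketch cannot express.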