On the Relationship Between Robustness and Expressivity of Graph Neural Networks

๐Ÿ“… 2025-04-18
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work investigates how bit-flip attacks (BFAs) degrade the expressive power of graph neural networks (GNNs)โ€”specifically, their ability to distinguish non-isomorphic graphs. We propose the first theoretical criterion for quantifying GNN expressivity loss under BFAs, establishing a unified analytical framework that jointly characterizes the interplay among architectural design, graph structural properties (e.g., homophily), and feature encoding schemes. We formally define and characterize GNN expressive fragility under BFAs and derive a theoretical lower bound on robustness within a formal robustness framework. Our methodology integrates graph theory, neural multiset function analysis, ReLU activation modeling, and empirical statistical validation. Theoretically, we show that ReLU-based GNNs are most vulnerable on highly homophilous graphs with low-dimensional or one-hot encoded node features. Extensive experiments across ten real-world datasets corroborate our theoretical predictions and yield actionable, implementation-ready guidelines for designing robust GNNs.

Technology Category

Application Category

๐Ÿ“ Abstract
We investigate the vulnerability of Graph Neural Networks (GNNs) to bit-flip attacks (BFAs) by introducing an analytical framework to study the influence of architectural features, graph properties, and their interaction. The expressivity of GNNs refers to their ability to distinguish non-isomorphic graphs and depends on the encoding of node neighborhoods. We examine the vulnerability of neural multiset functions commonly used for this purpose and establish formal criteria to characterize a GNN's susceptibility to losing expressivity due to BFAs. This enables an analysis of the impact of homophily, graph structural variety, feature encoding, and activation functions on GNN robustness. We derive theoretical bounds for the number of bit flips required to degrade GNN expressivity on a dataset, identifying ReLU-activated GNNs operating on highly homophilous graphs with low-dimensional or one-hot encoded features as particularly susceptible. Empirical results using ten real-world datasets confirm the statistical significance of our key theoretical insights and offer actionable results to mitigate BFA risks in expressivity-critical applications.
Problem

Research questions and friction points this paper is trying to address.

Analyzing GNN vulnerability to bit-flip attacks (BFAs)
Establishing criteria for GNN expressivity loss from BFAs
Deriving theoretical bounds for BFA impact on GNNs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analytical framework for GNN vulnerability to BFAs
Theoretical bounds for expressivity degradation by BFAs
Empirical validation on ten real-world datasets
๐Ÿ”Ž Similar Papers
No similar papers found.