Connectivity-Guided Sparsification of 2-FWL GNNs: Preserving Full Expressivity with Improved Efficiency

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Higher-order graph neural networks (HOGNNs) achieve 2-FWL expressive power but incur O(n³) computational complexity due to three-node interactions; existing acceleration techniques often compromise expressivity. Method: We propose Co-Sparsify, the first framework to show that three-node interactions are expressively necessary only within biconnected components (BCCs). Leveraging this insight, we design a structure-aware sparsification strategy that retains higher-order interactions exclusively inside BCCs, avoiding approximation or sampling. The method integrates 2-FWL message passing, BCC decomposition, connectivity-aware partitioning, and global readout. Contribution/Results: Co-Sparsify provably preserves 2-FWL equivalence while drastically reducing memory and computation. Experiments on ZINC, QM9, and other benchmarks demonstrate superior predictive performance over state-of-the-art HOGNNs, alongside significant efficiency gains.
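The BCC decomposition at the heart of this pipeline can be computed with the classic Hopcroft-Tarjan DFS. The sketch below is a minimal pure-Python illustration (not the authors' code): it returns the node sets of the biconnected components, i.e., the only regions in which Co-Sparsify would retain three-node interactions.

```python
def biconnected_components(adj):
    """Hopcroft-Tarjan biconnected components of a simple undirected graph.

    adj: dict mapping each node to an iterable of its neighbours.
    Returns a list of node sets, one per biconnected component.
    """
    disc, low = {}, {}   # discovery times and low-links
    stack, comps = [], []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v not in disc:            # tree edge
                stack.append((u, v))
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] >= disc[u]:    # u separates a component
                    comp = set()
                    while True:
                        e = stack.pop()
                        comp.update(e)
                        if e == (u, v):
                            break
                    comps.append(comp)
            elif disc[v] < disc[u]:      # back edge
                stack.append((u, v))
                low[u] = min(low[u], disc[v])

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return comps
```

On a triangle with a pendant edge (nodes 0-1-2 in a cycle, node 3 attached to 2) this yields the components {0, 1, 2} and {2, 3}; only the triangle would keep 3-node terms.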

📝 Abstract
Higher-order Graph Neural Networks (HOGNNs) based on the 2-FWL test achieve superior expressivity by modeling 2- and 3-node interactions, but at O(n³) computational cost. Existing efficiency methods typically mitigate this burden at the cost of reduced expressivity. We propose Co-Sparsify, a connectivity-aware sparsification framework that eliminates provably redundant computations while preserving full 2-FWL expressive power. Our key insight is that 3-node interactions are expressively necessary only within biconnected components, maximal subgraphs in which every pair of nodes lies on a common cycle. Outside these components, structural relationships can be fully captured via 2-node message passing or global readout, rendering higher-order modeling unnecessary. Co-Sparsify restricts 2-node message passing to connected components and 3-node interactions to biconnected ones, removing computation without approximation or sampling. We prove that Co-Sparsified GNNs are as expressive as the 2-FWL test. Empirically, on PPGN, Co-Sparsify matches or exceeds accuracy on synthetic substructure counting tasks and achieves state-of-the-art performance on real-world benchmarks (ZINC, QM9). This study demonstrates that high expressivity and scalability are not mutually exclusive: principled, topology-guided sparsification enables powerful, efficient GNNs with theoretical guarantees.
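To make the reduced-computation claim concrete, here is a rough count of the 3-tuples retained under BCC-restricted sparsification. The graph sizes below are hypothetical, and overlap of components at cut vertices is ignored; this is a back-of-envelope sketch, not the paper's cost model.

```python
def triple_counts(n, bcc_sizes):
    """Compare dense 2-FWL work with BCC-restricted work.

    Dense 2-FWL touches all n**3 ordered 3-tuples of nodes; restricting
    3-node interactions to biconnected components keeps only tuples whose
    three nodes share a component (cut-vertex overlap ignored here).
    """
    dense = n ** 3
    sparse = sum(s ** 3 for s in bcc_sizes)
    return dense, sparse

# Hypothetical 20-node molecule-like graph whose BCCs have 6, 5 and 3 nodes:
# triple_counts(20, [6, 5, 3]) -> (8000, 368)
```

The gap widens on sparse, tree-like graphs, where biconnected components are small relative to n.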
Problem

Research questions and friction points this paper is trying to address.

Reducing computational complexity of higher-order GNNs while maintaining expressivity
Eliminating redundant computations in 2-FWL GNNs through connectivity analysis
Preserving full expressive power while improving efficiency via topology-guided sparsification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparsifies GNNs using a connectivity-guided framework
Restricts 3-node interactions to biconnected components
Preserves full expressivity while improving efficiency
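A toy rendition of the restricted refinement the points above describe: one 2-FWL color-refinement round in which, for each node pair, the third node ranges only over an allowed set (under Co-Sparsify, nodes sharing a biconnected component with the pair). The dict-based representation and names are illustrative, not the paper's implementation.

```python
def fwl2_round(colors, nodes, allowed_w):
    """One sparsified 2-FWL refinement round.

    colors:    dict (u, v) -> int, current color of each ordered node pair
    allowed_w: dict (u, v) -> iterable of third nodes considered for (u, v)
    Returns a new dict of compact integer colors for all pairs.
    """
    sigs = {}
    for u in nodes:
        for v in nodes:
            # multiset of (c(u,w), c(w,v)) over the allowed third nodes only
            neigh = tuple(sorted((colors[(u, w)], colors[(w, v)])
                                 for w in allowed_w[(u, v)]))
            sigs[(u, v)] = (colors[(u, v)], neigh)
    # relabel distinct signatures with fresh integer colors
    table = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
    return {pair: table[s] for pair, s in sigs.items()}
```

On a triangle with all third nodes allowed, one round leaves exactly two colors: one shared by the diagonal pairs (u, u) and one shared by the edges, as symmetry dictates.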
👥 Authors
Rongqin Chen (University of Macau)
Fan Mo (ZOZO Research, Waseda University)
Pak Lon Ip (University of Macau)
Shenghui Zhang (University of Macau)
Dan Wu (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Ye Li (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Faculty of Computer Science and Control Engineering, Shenzhen University of Advanced Technology)
Leong Hou U (University of Macau)

Topics: Spatial and Spatio-Temporal Databases · Data Visualization · Graph Learning · Reinforcement Learning