🤖 AI Summary
Existing graph coarsening methods struggle to preserve topological structure while reducing graph size, often degrading downstream performance or incurring excessive computational overhead in graph neural networks (GNNs). This work proposes STPGC, a scalable topology-preserving graph coarsening framework that, for the first time, integrates strong collapse and edge collapse from algebraic topology into graph coarsening, augmented with a neighborhood coning operation. STPGC efficiently eliminates redundant nodes and edges while rigorously preserving both algebraic topological features and the receptive field of GNNs. The method comes with theoretical guarantees, scales to large graphs, and supports approximation-based acceleration strategies. Experiments show that STPGC significantly improves coarsening efficiency and enhances GNN performance on node classification, validating its ability to maintain topological fidelity while accelerating training.
📝 Abstract
Graph coarsening reduces the size of a graph while preserving certain properties. Most existing methods preserve either spectral or spatial characteristics. Recent research has shown that preserving topological features helps maintain the predictive performance of graph neural networks (GNNs) trained on the coarsened graph, but existing topology-preserving methods suffer from exponential time complexity. To address these problems, we propose Scalable Topology-Preserving Graph Coarsening (STPGC), which extends the concepts of strong collapse and edge collapse from algebraic topology to graphs. Built on these concepts, STPGC comprises three new algorithms, GStrongCollapse, GEdgeCollapse, and NeighborhoodConing, which eliminate dominated nodes and edges while rigorously preserving topological features. We further prove that STPGC preserves the GNN receptive field and develop approximate algorithms to accelerate GNN training. Experiments on node classification with GNNs demonstrate the efficiency and effectiveness of STPGC.
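To make the idea of eliminating dominated nodes concrete, here is a minimal sketch of node strong collapse on a plain adjacency-set graph. This is not the paper's GStrongCollapse implementation; it only illustrates the standard notion from algebraic topology that the abstract builds on: a vertex `v` is dominated by a neighbor `u` when the closed neighborhood `N[v]` is contained in `N[u]`, and deleting dominated vertices preserves the homotopy type of the graph's clique complex. The `strong_collapse` function and the dict-of-sets graph representation are assumptions for illustration.

```python
def strong_collapse(adj):
    """Iteratively delete dominated vertices; return the surviving core.

    adj: dict mapping each vertex to the set of its neighbors.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            closed_v = adj[v] | {v}
            # v is dominated if some neighbor's closed neighborhood
            # contains all of v's closed neighborhood
            if any(closed_v <= adj[u] | {u} for u in adj[v] if u != v):
                for u in adj[v]:          # detach v from its neighbors
                    adj[u].discard(v)
                del adj[v]
                changed = True
                break                     # rescan after each removal
    return adj

# A triangle with a pendant vertex is contractible: it collapses to one node.
tri = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(len(strong_collapse(tri)))   # 1

# A 4-cycle carries a topological hole, so no vertex is dominated.
c4 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(len(strong_collapse(c4)))    # 4
```

The two small examples show the intended behavior: a contractible graph shrinks to a single vertex, while a graph with a genuine 1-dimensional hole is left untouched, which is the sense in which collapse-based coarsening "rigorously preserves topological features."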