Not All Neighbors Matter: Understanding the Impact of Graph Sparsification on GNN Pipelines

📅 2026-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work presents the first systematic evaluation of graph sparsification as a lightweight preprocessing step for large-scale graph neural network (GNN) pipelines, where multi-hop neighborhood traversal during training and inference on billion-edge graphs incurs severe data movement and computational bottlenecks. We develop a scalable experimental framework that integrates diverse sparsification strategies, including random sparsification and K-Neighbor, with prominent GNN architectures such as GAT, and conduct end-to-end analysis on node classification tasks. Our results demonstrate that sparsification not only preserves but can even improve model accuracy (e.g., a 6.8% accuracy gain for GAT on PubMed) while substantially accelerating inference, achieving up to 11.7× speedup on the Products graph with only a 0.7% accuracy drop. Moreover, the acceleration benefits intensify with graph scale, and the preprocessing overhead is rapidly amortized.

📝 Abstract
As graphs scale to billions of nodes and edges, graph machine learning workloads are constrained by the cost of multi-hop traversals over exponentially growing neighborhoods. While various system-level and algorithmic optimizations have been proposed to accelerate Graph Neural Network (GNN) pipelines, data management and movement remain the primary bottlenecks at scale. In this paper, we explore whether graph sparsification, a well-established technique that reduces edges to create sparser neighborhoods, can serve as a lightweight preprocessing step to address these bottlenecks while preserving accuracy on node classification tasks. We develop an extensible experimental framework that enables systematic evaluation of how different sparsification methods affect the performance and accuracy of GNN models. We conduct the first comprehensive study of GNN training and inference on sparsified graphs, revealing several key findings. First, sparsification often preserves or even improves predictive performance. As an example, random sparsification raises the accuracy of the GAT model by 6.8% on the PubMed graph. Second, benefits increase with scale, substantially accelerating both training and inference. Our results show that the K-Neighbor sparsifier improves model serving performance on the Products graph by 11.7× with only a 0.7% accuracy drop. Importantly, we find that the computational overhead of sparsification is quickly amortized, making it practical for very large graphs.
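The two sparsifiers named in the abstract can be sketched on a plain edge list as follows. This is an illustrative sketch only: the function names, the seeded-RNG interface, and the exact sampling details are assumptions for exposition, not the paper's implementation, which operates on large-scale graph storage rather than Python lists.

```python
import random
from collections import defaultdict

def random_sparsify(edges, keep_prob, seed=0):
    """Random sparsification: keep each edge independently with
    probability keep_prob (illustrative variant)."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() < keep_prob]

def k_neighbor_sparsify(edges, k, seed=0):
    """K-Neighbor-style sparsification (assumed variant): cap each
    node's outgoing neighborhood at k randomly chosen edges, which
    bounds the fan-out of multi-hop traversals."""
    rng = random.Random(seed)
    by_src = defaultdict(list)
    for u, v in edges:
        by_src[u].append((u, v))
    kept = []
    for nbrs in by_src.values():
        # Nodes with at most k neighbors are left untouched;
        # high-degree hubs are randomly down-sampled to k edges.
        kept.extend(nbrs if len(nbrs) <= k else rng.sample(nbrs, k))
    return kept

# Tiny example: one hub node (degree 100) plus two low-degree nodes.
edges = [(0, v) for v in range(100)] + [(1, 2), (2, 3)]
sparse = k_neighbor_sparsify(edges, k=10)  # hub capped at 10 edges
```

Because only the hub exceeds the cap, the sparsified graph keeps 10 of the hub's 100 edges and both low-degree edges; this degree-capping behavior is what makes the benefit grow with graph scale, since traversal cost is dominated by high-degree nodes.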
Problem

Research questions and friction points this paper is trying to address.

Graph Sparsification
Graph Neural Networks
Scalability
Node Classification
Data Movement Bottleneck
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Sparsification
Graph Neural Networks
Scalability
Node Classification
Performance-Accuracy Trade-off
Yuhang Song (Boston University)
Naima Abrar Shami (Boston University)
Romaric Duvignau (Chalmers University of Technology)
Vasiliki Kalavri (Boston University)
Stream Processing · Distributed Systems · Systems for Secure Computation · Systems for ML