Subsampling Graphs with GNN Performance Guarantees

📅 2025-02-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the high annotation, storage, and computational overheads incurred by full-graph training in Graph Neural Networks (GNNs). We propose a model-agnostic and label-agnostic graph sub-sampling framework. Its core innovation lies in the first application of Tree Mover’s Distance (TMD) to quantify graph structural similarity, integrated with graph structural compression and unsupervised optimization to achieve topology- and semantics-preserving sub-sampling. Theoretically, we derive a tight upper bound on the generalization error induced by sub-sampling for GNNs. Empirically, our method significantly outperforms existing sub-sampling approaches across multiple benchmark datasets: using only 20–40% of nodes, it achieves over 95% of the performance attained by full-graph training, while substantially reducing annotation and computational costs. Notably, the framework is applicable in early development stages—prior to label acquisition or model selection—offering both strong theoretical guarantees and practical robustness.

Technology Category

Application Category

📝 Abstract
How can we subsample graph data so that a graph neural network (GNN) trained on the subsample achieves performance comparable to training on the full dataset? This question is of fundamental interest, as smaller datasets reduce labeling costs, storage requirements, and computational resources needed for training. Selecting an effective subset is challenging: a poorly chosen subsample can severely degrade model performance, and empirically testing multiple subsets for quality obviates the benefits of subsampling. Therefore, it is critical that subsampling comes with guarantees on model performance. In this work, we introduce new subsampling methods for graph datasets that leverage the Tree Mover's Distance to reduce both the number of graphs and the size of individual graphs. To our knowledge, our approach is the first that is supported by rigorous theoretical guarantees: we prove that training a GNN on the subsampled data results in a bounded increase in loss compared to training on the full dataset. Unlike existing methods, our approach is both model-agnostic, requiring minimal assumptions about the GNN architecture, and label-agnostic, eliminating the need to label the full training set. This enables subsampling early in the model development pipeline (before data annotation, model selection, and hyperparameter tuning) reducing costs and resources needed for storage, labeling, and training. We validate our theoretical results with experiments showing that our approach outperforms existing subsampling methods across multiple datasets.
Problem

Research questions and friction points this paper is trying to address.

Subsampling graph data efficiently
Ensuring GNN performance with guarantees
Reducing costs and resources in training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Subsampling with Tree Mover's Distance
Model-agnostic graph neural networks
Label-agnostic early pipeline subsampling
🔎 Similar Papers
No similar papers found.