🤖 AI Summary
This paper targets four challenges that graph neural networks (GNNs) face in few-shot node classification: high computational overhead, overfitting, over-squashing, and over-smoothing. The authors propose GERN, a progressive graph reconstruction framework grounded in effective resistance. GERN is the first method to leverage random spanning tree sampling and a path-graph transformation to progressively reconstruct the input graph into a sequence of topology-preserving, sparse random path graphs, which replace the original graph during GCN training. By preserving structural fidelity while drastically reducing graph density, GERN simultaneously accelerates training and improves generalization. Extensive experiments on multiple real-world graph datasets demonstrate up to a 5.2× training speedup and an average accuracy gain of 1.8%. Notably, GERN performs best under extreme few-shot settings (label rate < 5%), highlighting its robustness and efficacy in data-scarce scenarios.
📝 Abstract
We present GERN, a novel scalable framework for training GNNs on node classification tasks, based on effective resistance, a standard tool in spectral graph theory. Our method progressively refines the GNN weights on a sequence of random spanning trees, suitably transformed into path graphs, which, despite their simplicity, are shown to retain essential topological and node information of the original input graph. The sparsity of these path graphs substantially lightens the computational burden of GNN training. This not only enhances scalability but also improves accuracy at test time, especially in small training set regimes, which are of great practical importance since labels are hard to obtain in many real-world scenarios. In these settings, our framework performs particularly well, as it effectively counters the overfitting that degrades training when the training set is small. Our method also mitigates common issues such as over-squashing and over-smoothing while avoiding under-reaching. Although our framework is flexible and can be deployed with several types of GNNs, in this paper we focus on graph convolutional networks and carry out an extensive experimental investigation on a number of real-world graph benchmarks, achieving simultaneous improvements in training speed and test accuracy over a wide pool of representative baselines.
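The two-step pipeline described above (sample a random spanning tree, then linearize it into a path graph that replaces the original graph during training) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it samples a uniform spanning tree via Wilson's loop-erased random walks (a standard algorithm; the abstract does not name the sampler), and the DFS-preorder linearization used to turn the tree into a path is an assumption, since the excerpt does not specify GERN's exact path-graph transformation.

```python
import random

def wilson_spanning_tree(adj, seed=None):
    """Sample a uniform random spanning tree with Wilson's algorithm.

    adj: dict mapping each node to a list of its neighbors
         (undirected, connected graph).
    Returns a parent map: tree edges are (v, parent[v]).
    """
    rng = random.Random(seed)
    nodes = list(adj)
    root = nodes[0]
    in_tree = {root}
    parent = {}
    for u in nodes[1:]:
        if u in in_tree:
            continue
        # Random walk from u until it hits the tree, remembering
        # only the most recent exit from each node (loop erasure).
        nxt = {}
        v = u
        while v not in in_tree:
            nxt[v] = rng.choice(adj[v])
            v = nxt[v]
        # Retrace the loop-erased path and graft it onto the tree.
        v = u
        while v not in in_tree:
            parent[v] = nxt[v]
            in_tree.add(v)
            v = nxt[v]
    return parent

def tree_to_path(adj, seed=None):
    """Linearize a random spanning tree into a path graph.

    Visits the tree in DFS preorder and chains consecutive nodes
    (an illustrative choice -- the excerpt does not give GERN's
    exact transform). Returns the path's edge list over the same
    node set as the input graph.
    """
    parent = wilson_spanning_tree(adj, seed)
    # Build the tree's adjacency from the parent map.
    tadj = {v: [] for v in adj}
    for v, p in parent.items():
        tadj[v].append(p)
        tadj[p].append(v)
    # Iterative DFS preorder over the tree.
    root = next(iter(adj))
    order, stack, seen = [], [root], {root}
    while stack:
        v = stack.pop()
        order.append(v)
        for w in tadj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    # Chain consecutive visited nodes: n-1 edges, max degree 2.
    return list(zip(order, order[1:]))
```

In a progressive training loop, one would resample a fresh path graph (a new seed) every few epochs and feed its sparse edge list to the GCN in place of the original graph, so that over time the model sees many different linearizations of the input topology.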