🤖 AI Summary
Graph neural network (GNN) layers can be designed so that the forward pass iteratively reduces a graph-regularized energy function, making the output node embeddings serve dually as predictive features for downstream tasks (e.g., node classification) and as interpretable energy minimizers. However, such architectures are difficult to scale, since convergence of the forward pass may require considerable depth. This paper proposes a sampling-based energy function and scalable GNN layers that iteratively reduce it, with convergence guarantees in certain settings. The resulting full architecture, MuseGNN, achieves competitive accuracy and scalability on the largest publicly-available node classification benchmark, which exceeds 1TB in size.
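As a concrete illustration of the energy-minimization view of GNN layers, the forward pass can be sketched as gradient descent on a graph-regularized energy, so that each "layer" is one descent step. This is a minimal sketch assuming a standard quadratic Laplacian-smoothness energy; the function names and the specific energy form are illustrative, not the paper's actual formulation.

```python
import numpy as np

def energy(Y, X, L, lam):
    # Quadratic graph-regularized energy: fidelity to the input
    # features X plus a Laplacian smoothness penalty on Y.
    return np.sum((Y - X) ** 2) + lam * np.trace(Y.T @ L @ Y)

def unfolded_forward(X, L, lam=1.0, step=0.1, n_layers=20):
    # Each "layer" is one gradient-descent step on the energy, so a
    # deeper model corresponds to more minimization iterations and
    # the output embeddings approach the energy minimizer.
    Y = X.copy()
    for _ in range(n_layers):
        grad = 2.0 * (Y - X) + 2.0 * lam * (L @ Y)
        Y = Y - step * grad
    return Y

# Toy example: path graph on 4 nodes, 3-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A     # unnormalized graph Laplacian
X = np.random.default_rng(0).normal(size=(4, 3))
Y = unfolded_forward(X, L)         # embeddings = approximate energy minimizer
```

Because the layer weights are tied to a single energy, adding depth only drives the embeddings closer to the minimizer rather than changing the model class, which is what gives these architectures their interpretability.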
📝 Abstract
Among the many variants of graph neural network (GNN) architectures capable of modeling data with cross-instance relations, an important subclass involves layers designed such that the forward pass iteratively reduces a graph-regularized energy function of interest. In this way, node embeddings produced at the output layer dually serve as both predictive features for solving downstream tasks (e.g., node classification) and energy function minimizers that inherit transparent, exploitable inductive biases and interpretability. However, scaling GNN architectures constructed in this way remains challenging, in part because the convergence of the forward pass may involve models with considerable depth. To tackle this limitation, we propose a sampling-based energy function and scalable GNN layers that iteratively reduce it, guided by convergence guarantees in certain settings. We also instantiate a full GNN architecture based on these designs, and the model achieves competitive accuracy and scalability when applied to the largest publicly-available node classification benchmark exceeding 1TB in size. Our source code is available at https://github.com/haitian-jiang/MuseGNN.
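The sampling-based idea can likewise be sketched: rather than descending on the full-graph energy, each step draws a node minibatch and updates only the sampled embeddings. Everything below (the induced-subgraph Laplacian approximation, the names, and all parameters) is an illustrative assumption for exposition, not MuseGNN's actual offline-sampling design.

```python
import numpy as np

def sampled_energy_step(Y, X, A, lam, step, rng, batch_size):
    # One stochastic descent step on a hypothetical sampled energy:
    # draw a node minibatch, form the induced-subgraph Laplacian, and
    # update only the sampled rows of the embedding matrix Y.
    idx = rng.choice(Y.shape[0], size=batch_size, replace=False)
    A_s = A[np.ix_(idx, idx)]                 # induced subgraph
    L_s = np.diag(A_s.sum(axis=1)) - A_s      # its Laplacian
    grad = 2.0 * (Y[idx] - X[idx]) + 2.0 * lam * (L_s @ Y[idx])
    Y[idx] = Y[idx] - step * grad
    return Y

def full_energy(Y, X, L, lam):
    # Full-graph reference energy, evaluated only for monitoring;
    # the point of sampling is to never touch L as a whole in training.
    return np.sum((Y - X) ** 2) + lam * np.trace(Y.T @ L @ Y)

# Toy example: 4-node path graph, 3-dimensional features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
X = rng.normal(size=(4, 3))
Y = X.copy()
for _ in range(300):
    Y = sampled_energy_step(Y, X, A, lam=1.0, step=0.05, rng=rng,
                            batch_size=3)
```

Each step costs only O(batch) memory, which is the property that lets this style of forward pass scale to graphs too large for full-batch message passing.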