Communication-free Sampling and 4D Hybrid Parallelism for Scalable Mini-batch GNN Training

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing distributed graph neural network (GNN) mini-batch training is hindered by the high overhead of neighborhood sampling and by the limited scalability of pure data parallelism. This work proposes ScaleGNN, a framework that introduces a novel communication-free uniform vertex sampling algorithm and integrates it into a four-dimensional hybrid parallel architecture combining communication-free sampling, 3D parallel matrix multiplication, and data parallelism, further optimized with low-precision communication, kernel fusion, and communication-computation overlap. The resulting system substantially improves training efficiency and scalability, achieving end-to-end speedups of up to 3.5× over the state-of-the-art baseline on the ogbn-products dataset. ScaleGNN demonstrates strong scaling across diverse supercomputing platforms, reaching 2,048 devices on both Perlmutter and Frontier and 1,024 devices on Tuolumne.
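One of the optimizations listed above, low-precision communication, can be pictured with a short sketch. The snippet below is not taken from ScaleGNN; it assumes a PyTorch `torch.distributed` setup with an already-initialized process group, and the helper name `allreduce_low_precision` is hypothetical. It only shows the general idea of casting a tensor to bfloat16 before a collective so that fewer bytes cross the network.

```python
import torch
import torch.distributed as dist

def allreduce_low_precision(tensor: torch.Tensor) -> torch.Tensor:
    """All-reduce a float32 tensor while communicating in bfloat16.

    Downcasting before the collective roughly halves the bytes on the wire;
    the result is upcast back to float32 for the rest of the computation.
    Assumes dist.init_process_group() has already been called.
    """
    compact = tensor.to(torch.bfloat16)             # downcast before communication
    dist.all_reduce(compact, op=dist.ReduceOp.SUM)  # less traffic than a float32 all-reduce
    return compact.to(torch.float32)                # restore working precision
```

The trade-off is some loss of numerical precision during the reduction, which is why such schemes are usually applied selectively (e.g., to activations or gradients that tolerate it).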
📝 Abstract
Graph neural networks (GNNs) are widely used for learning on graph datasets derived from various real-world scenarios. Learning from extremely large graphs requires distributed training, and mini-batching with sampling is a popular approach for parallelizing GNN training. Existing distributed mini-batch approaches have significant performance bottlenecks due to expensive sampling methods and limited scaling when using data parallelism. In this work, we present ScaleGNN, a 4D parallel framework for scalable mini-batch GNN training that combines communication-free distributed sampling, 3D parallel matrix multiplication (PMM), and data parallelism. ScaleGNN introduces a uniform vertex sampling algorithm, enabling each process (GPU device) to construct its local mini-batch, i.e., subgraph partition, without any inter-process communication. 3D PMM enables scaling mini-batch training to much larger GPU counts than vanilla data parallelism with significantly lower communication overhead. We also present additional optimizations: overlapping sampling with training, reducing communication overhead by sending data in lower precision, kernel fusion, and communication-computation overlap. We evaluate ScaleGNN on five graph datasets and demonstrate strong scaling up to 2048 GPUs on Perlmutter, 2048 GCDs on Frontier, and 1024 GPUs on Tuolumne. On Perlmutter, ScaleGNN achieves a 3.5x end-to-end training speedup over the SOTA baseline on ogbn-products.
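The communication-free sampling idea in the abstract can be illustrated with a minimal sketch. The code below is an illustrative assumption, not ScaleGNN's algorithm: it assumes every rank shares the same per-step RNG seed and a simple block partition of training-vertex IDs, so each rank can independently draw the identical global uniform sample and keep only the vertices in its own partition, with no inter-process communication needed to form the local mini-batch.

```python
import numpy as np

def local_minibatch_vertices(num_train_vertices: int, batch_size: int,
                             rank: int, world_size: int, step: int,
                             base_seed: int = 0) -> np.ndarray:
    """Return this rank's share of a globally uniform mini-batch of vertex IDs."""
    # Same seed on every rank => every rank draws the identical global sample,
    # so no communication is needed to agree on the mini-batch.
    rng = np.random.default_rng(base_seed + step)
    global_batch = rng.choice(num_train_vertices, size=batch_size, replace=False)

    # Illustrative block partition of vertex IDs across ranks (an assumption).
    part = num_train_vertices // world_size
    lo = rank * part
    hi = num_train_vertices if rank == world_size - 1 else lo + part

    # Keep only the sampled vertices that live in this rank's partition.
    return global_batch[(global_batch >= lo) & (global_batch < hi)]
```

Because every rank derives the same global sample deterministically, the union of all local outputs is exactly one uniform mini-batch, which is the property the paper's sampling scheme is described as providing.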
Problem

Research questions and friction points this paper is trying to address.

Graph Neural Networks
Distributed Training
Mini-batch Sampling
Scalability
Communication Overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

communication-free sampling
4D hybrid parallelism
distributed GNN training
3D parallel matrix multiplication
mini-batch graph learning