GSplit: Scaling Graph Neural Network Training on Large Graphs via Split-Parallelism

📅 2023-03-24
🏛️ arXiv.org
📈 Citations: 12
Influential: 0
🤖 AI Summary
To address inefficiencies in data-parallel GNN training on large-scale graphs (redundant subgraph sampling, repeated feature loading, and low GPU utilization), this paper proposes split parallelism, a novel hybrid parallel paradigm. It dynamically partitions mini-batch sampling and model training across GPUs at each iteration, enabling redundancy-free online sharding. The method integrates lightweight split scheduling, optimized subgraph sampling, and a feature-caching mechanism, and remains compatible with PyTorch and mainstream GNN architectures. Experiments on multiple large-scale graph datasets show that the approach achieves up to 2.3× higher GPU utilization and a 1.8–3.1× end-to-end training speedup over state-of-the-art systems (DGL, Quiver, and P³), alleviating both I/O and computational bottlenecks in large-graph GNN training.
📝 Abstract
Graph neural networks (GNNs), an emerging class of machine learning models for graphs, have gained popularity for their superior performance in various graph analytical tasks. Mini-batch training is commonly used to train GNNs on large graphs, and data parallelism is the standard approach to scale mini-batch training across multiple GPUs. One of the major performance costs in GNN training is the loading of input features, which prevents GPUs from being fully utilized. In this paper, we argue that this problem is exacerbated by redundancies that are inherent to the data parallel approach. To address this issue, we introduce a hybrid parallel mini-batch training paradigm called split parallelism. Split parallelism avoids redundant data loads and splits the sampling and training of each mini-batch across multiple GPUs online, at each iteration, using a lightweight splitting algorithm. We implement split parallelism in GSplit and show that it outperforms state-of-the-art mini-batch training systems like DGL, Quiver, and $P^3$.
Problem

Research questions and friction points this paper is trying to address.

Redundant subgraph sampling across GPUs in data-parallel GNN training
Communication overhead when sharding mini-batch work across GPUs
Limited scalability of mini-batch training on large graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces split parallelism to avoid redundant subgraph sampling
Uses lightweight partitioning to minimize communication overheads
Implements hybrid parallel training across multiple GPUs efficiently
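The redundancy the bullets above target can be illustrated with a toy sketch. This is not the paper's algorithm: the function names and the hash-based vertex assignment are illustrative assumptions standing in for GSplit's lightweight splitting algorithm. The point is only that data parallelism loads a feature once per GPU that sampled the vertex, while a split assigns each sampled vertex to exactly one GPU.

```python
# Conceptual sketch (illustrative, not the paper's implementation):
# feature loads under data parallelism vs. a redundancy-free split.

def data_parallel_loads(per_gpu_subgraphs):
    """Each GPU loads features for every vertex in its own sampled
    subgraph, so vertices sampled by several GPUs are loaded repeatedly."""
    return sum(len(sg) for sg in per_gpu_subgraphs)

def split_parallel_loads(per_gpu_subgraphs, num_gpus):
    """Assign every sampled vertex to a single owner GPU (here a trivial
    modulo hash stands in for a real splitting algorithm); each vertex's
    features are then loaded exactly once."""
    all_vertices = set().union(*per_gpu_subgraphs)
    shards = [set() for _ in range(num_gpus)]
    for v in all_vertices:
        shards[v % num_gpus].add(v)  # lightweight online split
    return sum(len(s) for s in shards), shards

# Two GPUs whose sampled neighborhoods overlap on vertices {2, 3}.
subgraphs = [{0, 1, 2, 3}, {2, 3, 4, 5}]
print(data_parallel_loads(subgraphs))           # 8 loads, 2 redundant
loads, shards = split_parallel_loads(subgraphs, 2)
print(loads)                                    # 6 loads, redundancy-free
```

In the real system the split must also balance load and minimize cross-GPU communication for the training step, which is why a modulo hash is only a stand-in here.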
Sandeep Polisetty
Student, University of Massachusetts, Amherst
Systems for Machine Learning
Juelin Liu
University of Massachusetts, Amherst
Kobi Falus
University of Massachusetts, Amherst
Y. Fung
University of Illinois Urbana-Champaign
Seung-Hwan Lim
Oak Ridge National Laboratory
Machine Learning, Graph Analysis, Parallel and Distributed Systems
Hui Guan
UMass Amherst
Machine Learning Systems
M. Serafini
University of Massachusetts, Amherst