Plexus: Taming Billion-edge Graphs with 3D Parallel GNN Training

📅 2025-05-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training graph neural networks (GNNs) on billion-edge graphs faces critical challenges: GPU memory limitations, accuracy degradation from sampling, CPU-GPU data transfer overhead, and high communication costs with load imbalance in distributed training. This paper introduces the first three-dimensional (3D) parallel training paradigm for full-graph GNNs, jointly partitioning nodes, edges, and feature dimensions. The authors propose a dynamic permutation-based load-balancing strategy and a hardware-aware performance model that predicts the best 3D configuration, integrated with graph data redistribution and communication-computation overlap. Evaluated on up to 2,048 GPUs on Perlmutter and 2,048 GCDs on Frontier, the approach achieves a 2.3–12.5× speedup in training throughput and reduces end-to-end training time by 5.2–54.2×, significantly improving scalability and efficiency for ultra-large-scale GNN training.
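The core idea of 3D parallelism is that each process owns both a 2D tile of the adjacency matrix and a slice of the feature dimension. As a rough illustration (not the paper's implementation), the sketch below assumes an X × Y × Z process grid where adjacency rows and columns are tiled over (X, Y) and feature columns over Z; the grid shape and even-split blocking are illustrative assumptions.

```python
def shard_shapes(num_nodes, feat_dim, grid=(2, 2, 2), rank_coords=(0, 0, 0)):
    """Hypothetical sketch: compute the local shard shapes one rank would
    hold under a 3D partitioning of a GNN layer's data.

    Assumes an even split; a real system would handle remainders and
    irregular sparsity (which is what load balancing addresses).
    """
    X, Y, Z = grid
    i, j, k = rank_coords
    rows = num_nodes // X    # adjacency rows owned along the X axis
    cols = num_nodes // Y    # adjacency columns owned along the Y axis
    feats = feat_dim // Z    # feature columns owned along the Z axis
    adj_block = (rows, cols)    # local tile of the sparse adjacency matrix
    feat_block = (cols, feats)  # local slice of the dense feature matrix
    return adj_block, feat_block
```

For example, 1,000 nodes with 128-dimensional features on a 2 × 2 × 2 grid would give each of the 8 ranks a 500 × 500 adjacency tile and a 500 × 64 feature slice, so the local sparse-dense product covers 1/8 of the layer's work.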

📝 Abstract
Graph neural networks have emerged as a potent class of neural networks capable of leveraging the connectivity and structure of real-world graphs to learn intricate properties and relationships between nodes. Many real-world graphs exceed the memory capacity of a GPU due to their sheer size, and using GNNs on them requires techniques such as mini-batch sampling to scale. However, this can lead to reduced accuracy in some cases, and sampling and data transfer from the CPU to the GPU can also slow down training. On the other hand, distributed full-graph training suffers from high communication overhead and load imbalance due to the irregular structure of graphs. We propose Plexus, a three-dimensional (3D) parallel approach for full-graph training that tackles these issues and scales to billion-edge graphs. Additionally, we introduce optimizations such as a permutation scheme for load balancing, and a performance model to predict the optimal 3D configuration. We evaluate Plexus on several graph datasets and show scaling results for up to 2048 GPUs on Perlmutter, which is 33% of the machine, and 2048 GCDs on Frontier. Plexus achieves unprecedented speedups of 2.3x-12.5x over existing methods and a reduction in the time to solution by 5.2-8.7x on Perlmutter and 7-54.2x on Frontier.
Problem

Research questions and friction points this paper is trying to address.

Scaling GNN training for billion-edge graphs efficiently
Reducing communication overhead in distributed full-graph training
Balancing load and optimizing performance in 3D parallel training
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D parallel approach for full-graph training
Permutation scheme for load balancing
Performance model for optimal 3D configuration
Aditya Ranjan
Department of Computer Science, University of Maryland
Siddharth Singh
Research Scientist at Nvidia
High Performance Computing · Artificial Intelligence
Cunyang Wei
Department of Computer Science, University of Maryland
A. Bhatele
Department of Computer Science, University of Maryland