🤖 AI Summary
To address memory explosion and cross-machine communication bottlenecks in distributed training of Graph Neural Networks (GNNs) on billion-scale graphs, this paper proposes Armada, an end-to-end system. Methodologically, it introduces (1) GREM, a streaming min-edge-cut partitioning algorithm that continuously refines prior vertex assignments during streaming rather than freezing them after an initial greedy selection, and (2) a disaggregated distributed training architecture that separates graph sampling, feature loading, and GPU computation so that resources for each can be allocated independently. Experimental results demonstrate that GREM reaches partition quality comparable to METIS while using 8–65× less memory and running 8–46× faster. End-to-end, Armada achieves up to 4.5× faster GNN training and up to 3.1× lower hardware cost, significantly advancing the scalability frontier for large-scale distributed GNN training.
📝 Abstract
We study distributed training of Graph Neural Networks (GNNs) on billion-scale graphs that are partitioned across machines. Efficient training in this setting relies on min-edge-cut partitioning algorithms, which minimize cross-machine communication due to GNN neighborhood sampling. Yet, min-edge-cut partitioning over large graphs remains a challenge: State-of-the-art (SoTA) offline methods (e.g., METIS) are effective, but they require orders of magnitude more memory and runtime than GNN training itself, while computationally efficient algorithms (e.g., streaming greedy approaches) suffer from increased edge cuts. Thus, in this work we introduce Armada, a new end-to-end system for distributed GNN training whose key contribution is GREM, a novel min-edge-cut partitioning algorithm that can efficiently scale to large graphs. GREM builds on streaming greedy approaches with one key addition: prior vertex assignments are continuously refined during streaming, rather than frozen after an initial greedy selection. Our theoretical analysis and experimental results show that this refinement is critical to minimizing edge cuts and enables GREM to reach partition quality comparable to METIS while using 8-65x less memory and running 8-46x faster. Given a partitioned graph, Armada leverages a new disaggregated architecture for distributed GNN training to further improve efficiency; we find that on common cloud machines, even with zero communication, GNN neighborhood sampling and feature loading bottleneck training. Disaggregation allows Armada to independently allocate resources for these operations and ensure that expensive GPUs remain saturated with computation. We evaluate Armada against SoTA systems for distributed GNN training and find that the disaggregated architecture leads to runtime improvements up to 4.5x and cost reductions up to 3.1x.