🤖 AI Summary
This work addresses the scalability bottleneck in modern GPU clusters caused by skewed communication traffic, which oversaturates a few links while leaving others idle, producing congestion and uneven resource utilization. The authors propose NIMBLE, a system that dynamically orchestrates multi-path communication across intra-node NVLink and inter-node networks at runtime. NIMBLE achieves transparent load balancing that preserves message ordering and determinism, built on a capacity-normalized minimum-congestion optimization model; it requires no application modifications and integrates seamlessly with existing communication libraries. It solves the optimization problem with a multiplicative-weights update algorithm and uses CUDA-aware GPU kernels to drive an RDMA pipeline, scheduling intermediate GPUs and rail-matched NICs for forwarding. Evaluated on an H100-SXM4 cluster, NIMBLE improves intra-node bandwidth by 2.3× and inter-node throughput by 3.8× over single-path baselines, outperforms NCCL and MPI by up to 5.2× under skewed All-to-Allv workloads, and accelerates Mixture-of-Experts (MoE) LLM training by 1.35×.
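To make the multiplicative-weights idea concrete, here is a minimal, self-contained sketch of how an MWU loop can split one traffic demand across parallel paths so that the maximum capacity-normalized congestion is driven down. This is an illustration of the generic technique only; the function name, parameters, and update rule are our assumptions, not NIMBLE's actual solver.

```python
# Hypothetical sketch: multiplicative-weights update (MWU) for splitting a
# single traffic demand across parallel paths of different capacities,
# minimizing the maximum capacity-normalized link load. Not NIMBLE's code.

def mwu_split(demand, capacities, eta=0.1, iters=200):
    """Return per-path traffic fractions for one demand of size `demand`."""
    n = len(capacities)
    weights = [1.0] * n
    for _ in range(iters):
        total = sum(weights)
        split = [w / total for w in weights]          # current fractional routing
        # Capacity-normalized congestion on each path.
        load = [demand * s / c for s, c in zip(split, capacities)]
        worst = max(load)
        # Multiplicatively shrink the weights of the most congested paths,
        # shifting traffic toward underutilized ones.
        weights = [w * (1.0 - eta * (l / worst)) for w, l in zip(weights, load)]
    total = sum(weights)
    return [w / total for w in weights]

# Example: split 10 units of traffic over a fast path (capacity 4) and a
# slow path (capacity 1). The split converges toward [0.8, 0.2], which
# equalizes the normalized load on both paths.
fracs = mwu_split(10.0, capacities=[4.0, 1.0])
```

In a multi-demand setting the same update runs over all demands jointly, penalizing whichever links are currently the bottleneck; this is the standard way MWU approximates minimum-congestion routing.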
📝 Abstract
Modern GPU-based high-performance computing clusters offer unprecedented communication bandwidth through heterogeneous intra-node interconnects and inter-node networks. However, despite this high aggregate bandwidth, many real-world communication patterns fail to fully utilize the available hardware. Traffic skew often leads to situations where a small subset of links becomes oversaturated while others remain underutilized, resulting in congestion, latency spikes, and poor scalability. Existing communication frameworks such as NCCL and MPI with UCX typically rely on static fastest-path routing or hashing-based multi-rail striping, which leaves significant bandwidth unused when runtime traffic deviates from expected distributions. To address these limitations, we propose NIMBLE (Node-Interconnect Multi-path Balancing with Execution-time orchestration), a runtime communication orchestration system that dynamically redistributes traffic to balance link utilization across all available intra-node and inter-node paths. NIMBLE formulates this as a capacity-normalized minimum-congestion optimization problem and solves it efficiently using a multiplicative-weights algorithm. It further employs CUDA-aware GPU kernel-based RDMA pipelining to route traffic through intermediate GPUs and rail-matched NICs. The system is endpoint-driven, integrates transparently with existing communication libraries without requiring application changes, and preserves ordering, determinism, and low overhead. On H100-SXM4 nodes with fully connected NVLink and four NDR400 rails, NIMBLE achieves up to 2.3x higher intra-node bandwidth and 3.8x higher inter-node throughput compared to single-path baselines. It outperforms NCCL and MPI by up to 5.2x on skewed All-to-Allv workloads and 1.35x on end-to-end LLM MoE workloads, while matching baseline performance under balanced traffic.
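The capacity-normalized minimum-congestion problem mentioned above can be written in one standard form (the notation here is ours, not taken from the paper):

```latex
\begin{aligned}
\min_{x,\;\lambda}\quad & \lambda \\
\text{s.t.}\quad
& \frac{1}{c_e}\sum_{d \in D}\;\sum_{\substack{p \in P_d \\ e \in p}} x_{d,p} \;\le\; \lambda
  && \forall e \in E,\\
& \sum_{p \in P_d} x_{d,p} \;=\; T_d && \forall d \in D,\\
& x_{d,p} \;\ge\; 0 && \forall d \in D,\; p \in P_d,
\end{aligned}
```

where $D$ is the set of traffic demands, $T_d$ the volume of demand $d$, $P_d$ its candidate paths (e.g., direct NVLink, NIC rails, or relays through intermediate GPUs), $c_e$ the capacity of link $e$, and $\lambda$ the maximum capacity-normalized link load being minimized. A multiplicative-weights algorithm approximately solves programs of this form by iteratively shifting flow away from the currently most congested links.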