🤖 AI Summary
To address three critical limitations of NCCL in large-scale GPU training—inefficient peer-to-peer (P2P) communication, poor fault tolerance against RoCE NIC (RNIC) port failures, and difficulty in observing transient collective communication anomalies—this paper proposes ICCL, a novel communication library that is efficient, reliable, and highly observable. ICCL’s key innovations are: (1) offloading P2P communication from GPUs to CPU threads to free GPU SM resources; (2) a primary-backup queue pair (QP) mechanism enabling millisecond-level RNIC failover; and (3) microsecond-granularity sliding-window network monitoring for precise detection of transient anomalies. Experiments show ICCL improves P2P throughput by 23.4% and reduces latency by 28.5% over NCCL, yielding a 6.02% end-to-end training throughput gain. ICCL has operated stably in production for several months and is open-sourced.
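To make the third idea concrete, below is a minimal, hypothetical sketch (not ICCL's actual implementation) of a microsecond-granularity sliding-window monitor: completion latencies are bucketed into fixed-size windows on a monotonic clock, and a window whose mean latency exceeds a threshold is flagged, which is the kind of fine-grained signal needed to catch transient collective-communication anomalies. All class and parameter names here are illustrative assumptions.

```cpp
// Hypothetical sketch of a microsecond-granularity sliding-window monitor.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <deque>

struct WindowStats {
    uint64_t window_start_us;   // start of this window on the monotonic clock
    uint64_t total_latency_us;  // sum of completion latencies in this window
    uint64_t samples;           // number of completions in this window
};

class SlidingWindowMonitor {
public:
    SlidingWindowMonitor(uint64_t window_us, size_t history, double threshold_us)
        : window_us_(window_us), history_(history), threshold_us_(threshold_us) {}

    // Record one send/recv completion together with its measured latency.
    void record(uint64_t latency_us) {
        uint64_t now = now_us();
        uint64_t start = now - now % window_us_;  // align to window boundary
        if (windows_.empty() || windows_.back().window_start_us != start) {
            if (!windows_.empty()) check(windows_.back());  // close previous window
            windows_.push_back({start, 0, 0});
            if (windows_.size() > history_) windows_.pop_front();
        }
        windows_.back().total_latency_us += latency_us;
        windows_.back().samples += 1;
    }

private:
    // Flag a window whose mean completion latency exceeds the threshold.
    void check(const WindowStats& w) const {
        if (w.samples == 0) return;
        double mean = double(w.total_latency_us) / double(w.samples);
        if (mean > threshold_us_)
            std::printf("anomaly: window @%lluus mean latency %.1fus (%llu samples)\n",
                        (unsigned long long)w.window_start_us, mean,
                        (unsigned long long)w.samples);
    }

    static uint64_t now_us() {
        using namespace std::chrono;
        return duration_cast<microseconds>(
            steady_clock::now().time_since_epoch()).count();
    }

    uint64_t window_us_;
    size_t history_;
    double threshold_us_;
    std::deque<WindowStats> windows_;
};
```

A caller would instantiate, say, `SlidingWindowMonitor(100, 1024, 500.0)` and invoke `record()` on every completion event; the 100 µs window size and 500 µs threshold are arbitrary illustrative values, not numbers from the paper.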
📝 Abstract
Large-scale LLM training requires collective communication libraries to exchange data among distributed GPUs. As a company dedicated to building and operating large-scale GPU training clusters, we encounter several challenges when using NCCL in production, including 1) limited efficiency due to costly and cumbersome P2P communication, 2) poor tolerance of frequent RNIC port failures, and 3) insufficient observability of transient collective communication anomalies. To address these issues, we propose ICCL, an efficient, reliable, and observable collective communication library for large-scale GPU training clusters. ICCL offloads P2P communication from GPU kernels to CPU threads for minimal SM consumption, and removes redundant memory copies that are irrelevant to the actual communication process. ICCL also introduces a primary-backup QP mechanism to tolerate frequent NIC port failures, and designs a window-based monitor to observe network anomalies at microsecond (O(µs)) granularity. We open-source ICCL and have deployed it in production training clusters for several months; results show that, compared to NCCL, ICCL achieves a 23.4% improvement in P2P throughput, a 28.5% reduction in P2P latency, and a 6.02% increase in training throughput. We also share our experience operating ICCL in large-scale clusters, hoping to give the community more insight into production-level collective communication libraries for LLM training.
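The primary-backup QP mechanism can be illustrated with a small, purely conceptual sketch, assuming each logical connection holds a primary and a backup queue pair bound to different RNIC ports and switches traffic to the backup when the primary port is reported down. The `QueuePair` and `FailoverChannel` types below are hypothetical stand-ins, not ICCL or ibverbs APIs.

```cpp
// Illustrative sketch only (assumed structure, not ICCL's code).
#include <atomic>
#include <cstddef>
#include <cstdio>

struct QueuePair {                                // hypothetical stand-in for an RDMA QP handle
    int nic_port;                                 // RNIC port this QP is bound to
    bool post_send(const void* buf, size_t len);  // post a work request on this QP
};

bool QueuePair::post_send(const void*, size_t) { return true; }  // stub for illustration

class FailoverChannel {
public:
    FailoverChannel(QueuePair* primary, QueuePair* backup)
        : primary_(primary), backup_(backup), use_backup_(false) {}

    // Called by a port-health watcher when a port failure is detected.
    void on_port_down(int port) {
        if (port == primary_->nic_port && !use_backup_.exchange(true))
            std::printf("port %d down, failing over to backup port %d\n",
                        port, backup_->nic_port);
    }

    // Sends always go through whichever QP is currently active, so failover
    // does not require tearing down and rebuilding the connection.
    bool send(const void* buf, size_t len) {
        QueuePair* qp = use_backup_.load() ? backup_ : primary_;
        return qp->post_send(buf, len);
    }

private:
    QueuePair* primary_;
    QueuePair* backup_;
    std::atomic<bool> use_backup_;
};
```

Keeping the backup QP pre-established is what allows the switch to be a simple flag flip on the data path, consistent with the millisecond-level failover the paper reports.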