Collective Communication for 100k+ GPUs

📅 2025-10-22
🤖 AI Summary
To address critical bottlenecks in training and inference of large language models (LLMs) on GPU clusters exceeding 100,000 devices—low communication throughput, high latency, and poor scalability—this paper introduces NCCLX, presented as the first high-performance collective communication framework specifically designed for ultra-large-scale LLM workloads. NCCLX features: (1) a hierarchical, topology-aware communication scheduler that adapts to heterogeneous network architectures; (2) a dynamic bandwidth optimization mechanism that maximizes link utilization; and (3) hardware-coordinated fault-tolerant transmission that preserves reliability at the 100,000-GPU scale. Evaluated with the Llama4 model across 100,000 GPUs, NCCLX achieves a 2.3× improvement in end-to-end communication efficiency and reduces latency by 41%, outperforming existing industry solutions. These advances position NCCLX as a scalable, robust communication infrastructure for both ultra-large-model training and low-latency inference.
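The hierarchical, topology-aware idea behind schedulers like the one summarized above can be illustrated with a minimal sketch. This is not NCCLX's actual API (which the summary does not describe); it simulates, in plain Python, the three phases of a hierarchical sum-allreduce: reduce within each node over fast local links, allreduce across node leaders over the scarcer inter-node network, then broadcast back within each node. The function name and shape are illustrative assumptions.

```python
# Hypothetical sketch of a hierarchical (topology-aware) allreduce.
# Not NCCLX's API -- a pure-Python simulation of the general technique.

def hierarchical_allreduce(values, gpus_per_node):
    """Sum-allreduce over a flat list of per-GPU values.

    Phase 1: reduce within each node to a local leader.
    Phase 2: allreduce across node leaders.
    Phase 3: broadcast the global sum back within each node.
    """
    nodes = [values[i:i + gpus_per_node]
             for i in range(0, len(values), gpus_per_node)]

    # Phase 1: intra-node reduce (NVLink-class links in practice).
    leader_sums = [sum(node) for node in nodes]

    # Phase 2: inter-node allreduce over leaders only, so far fewer
    # participants cross the network fabric.
    global_sum = sum(leader_sums)

    # Phase 3: intra-node broadcast of the result to every GPU.
    return [global_sum] * len(values)

# 16 "GPUs" arranged as 2 nodes x 8 GPUs each.
result = hierarchical_allreduce(list(range(16)), gpus_per_node=8)
assert result == [120] * 16  # sum(0..15) == 120 on every rank
```

The point of the hierarchy is that only one rank per node participates in the inter-node phase, which is what makes the pattern attractive on heterogeneous fabrics.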

📝 Abstract
The increasing scale of large language models (LLMs) necessitates highly efficient collective communication frameworks, particularly as training workloads extend to hundreds of thousands of GPUs. Traditional communication methods face significant throughput and latency limitations at this scale, hindering both the development and deployment of state-of-the-art models. This paper presents the NCCLX collective communication framework, developed at Meta, engineered to optimize performance across the full LLM lifecycle, from the synchronous demands of large-scale training to the low-latency requirements of inference. The framework is designed to support complex workloads on clusters exceeding 100,000 GPUs, ensuring reliable, high-throughput, and low-latency data exchange. Empirical evaluation on the Llama4 model demonstrates substantial improvements in communication efficiency. This research contributes a robust solution for enabling the next generation of LLMs to operate at unprecedented scales.
Problem

Research questions and friction points this paper addresses:

- Optimizing collective communication for 100k+ GPU clusters
- Overcoming throughput and latency limitations in LLM training
- Enabling efficient data exchange across the full LLM lifecycle
Innovation

Methods, ideas, or system contributions that make the work stand out:

- NCCLX framework optimizing collective communication for GPU clusters
- Support for complex workloads on clusters exceeding 100k GPUs
- Reliable, high-throughput, low-latency data exchange
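Why topology-aware communication matters at this scale can be seen from a back-of-envelope alpha-beta (latency-bandwidth) cost model. The numbers below are illustrative assumptions, not figures from the paper: a flat ring allreduce over all 100,000 ranks pays its per-step latency across the slow inter-node fabric, while a hierarchical schedule confines most steps to fast intra-node links.

```python
# Rough alpha-beta cost model comparing a flat ring allreduce with a
# hierarchical schedule. All constants are illustrative assumptions.

def ring_allreduce_time(n_ranks, msg_bytes, alpha, beta):
    # Classic ring allreduce: 2*(n-1) steps, each moving msg/n bytes.
    steps = 2 * (n_ranks - 1)
    return steps * (alpha + (msg_bytes / n_ranks) * beta)

GPUS, PER_NODE = 100_000, 8
MSG = 1 << 30                    # 1 GiB gradient buffer (assumed)
A_NET, B_NET = 5e-6, 1 / 50e9    # inter-node: ~5 us latency, ~50 GB/s
A_NVL, B_NVL = 1e-6, 1 / 400e9   # intra-node: ~1 us latency, ~400 GB/s

flat = ring_allreduce_time(GPUS, MSG, A_NET, B_NET)
hier = (ring_allreduce_time(PER_NODE, MSG, A_NVL, B_NVL)            # in-node reduce
        + ring_allreduce_time(GPUS // PER_NODE, MSG, A_NET, B_NET)  # across leaders
        + ring_allreduce_time(PER_NODE, MSG, A_NVL, B_NVL))         # in-node broadcast

print(f"flat ring: {flat:.3f}s  hierarchical: {hier:.3f}s")
assert hier < flat  # hierarchy wins under these assumed link parameters
```

Under these assumed parameters the flat ring is dominated by 200k network-latency terms, while the hierarchical schedule sends only 12,500 leaders across the fabric; real systems add pipelining and overlap on top of this.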