ForestColl: Throughput-Optimal Collective Communications on Heterogeneous Network Fabrics

📅 2024-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address throughput bottlenecks in collective communication (e.g., allreduce) during large language model training over heterogeneous interconnects—such as hybrid switch fabrics and direct-attached accelerators—this paper proposes the first theoretically optimal broadcast/aggregation tree scheduling framework that supports arbitrary topologies and runs in strongly polynomial time. The method constructs throughput-optimal spanning trees via graph-theoretic modeling, establishes a topology-agnostic scheduling formulation, and introduces a lightweight runtime adaptation layer. Evaluated on AMD MI250 and NVIDIA DGX A100 platforms, the framework achieves significant throughput improvements over RCCL and NCCL, yielding measurable LLM training acceleration. Its schedule generation is also both faster and higher-quality than current state-of-the-art methods, combining provable optimality with practical deployability.

📝 Abstract
As modern DNN models grow ever larger, collective communications between the accelerators (allreduce, etc.) emerge as a significant performance bottleneck. Designing efficient communication schedules is challenging, given today's heterogeneous and diverse network fabrics. We present ForestColl, a tool that generates throughput-optimal schedules for any network topology. ForestColl constructs broadcast/aggregation spanning trees as the communication schedule, achieving theoretical optimality. Its schedule generation runs in strongly polynomial time and is highly scalable. ForestColl supports any network fabrics, including both switching fabrics and direct accelerator connections. We evaluated ForestColl on multi-box AMD MI250 and NVIDIA DGX A100 platforms. ForestColl showed significant improvements over the vendors' own optimized communication libraries, RCCL and NCCL, across various settings and in LLM training. ForestColl also outperformed other state-of-the-art schedule generation techniques with both more efficient generated schedules and substantially faster schedule generation speed.
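The abstract describes building broadcast/aggregation spanning trees over the network topology as the communication schedule. ForestColl's actual algorithm constructs an optimal *set* of trees (a spanning forest) to saturate all links; as a much simpler illustration of the tree-on-topology idea, the sketch below grows a single broadcast tree that maximizes the bottleneck link bandwidth on any root-to-leaf path (a Prim-style widest-path tree). The function name, the toy 4-GPU topology, and the bandwidth figures are all hypothetical and not taken from the paper.

```python
import heapq

def max_bottleneck_tree(adj, root):
    """Grow a broadcast tree from `root` that maximizes the minimum
    link bandwidth on any root-to-leaf path (Prim-style widest path).
    `adj` maps node -> list of (neighbor, bandwidth_GBps).
    Returns a parent map: child -> parent (root maps to None)."""
    parent = {root: None}
    # Max-heap on bandwidth (negate, since heapq is a min-heap).
    heap = [(-bw, root, nbr) for nbr, bw in adj[root]]
    heapq.heapify(heap)
    while heap:
        neg_bw, src, dst = heapq.heappop(heap)
        if dst in parent:
            continue  # already attached via a wider (or equal) link
        parent[dst] = src
        for nbr, bw in adj[dst]:
            if nbr not in parent:
                heapq.heappush(heap, (-bw, dst, nbr))
    return parent

# Toy heterogeneous topology: GPUs 0-1 and 2-3 share fast intra-box
# links (300 GB/s); the boxes are joined by slower 50 GB/s links.
adj = {
    0: [(1, 300), (2, 50)],
    1: [(0, 300), (3, 50)],
    2: [(3, 300), (0, 50)],
    3: [(2, 300), (1, 50)],
}
tree = max_bottleneck_tree(adj, root=0)
# The tree crosses the slow inter-box link only once (0 -> 2) and
# otherwise uses the fast intra-box links.
```

This single-tree heuristic only illustrates why topology-aware trees beat topology-oblivious rings on heterogeneous fabrics; achieving the throughput optimality the paper proves requires scheduling multiple trees concurrently.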
Problem

Research questions and friction points this paper is trying to address.

Optimal Communication Methods
Information Transfer Speed
Large-scale Model Training
Innovation

Methods, ideas, or system contributions that make the work stand out.

ForestColl
Optimized Collective Communication
High-Performance Networking