🤖 AI Summary
All-to-All communication in GPU clusters suffers from incast congestion and straggler effects, aggravated by highly heterogeneous interconnects (e.g., fast NVLink within servers versus slower Ethernet between them), while existing schedulers add high scheduling overhead.
Method: We propose the first lightweight scheduling framework that simultaneously achieves theoretical near-optimality and practical low overhead. It introduces a hierarchical network model to decouple intra- and inter-node communication, a polynomial-time algorithm to maximize bottleneck-link utilization, and a background GPU-to-GPU data pre-migration mechanism.
Contribution/Results: We theoretically prove that, under high-speed intra-node networks, our approach asymptotically approaches the optimal completion time with negligible computational overhead. Experiments show that our method achieves All-to-All completion times comparable to exact solvers like TACCL, while reducing scheduling latency by 3–4 orders of magnitude—significantly outperforming existing heuristic and optimization-based schedulers.
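To make the round-based scheduling idea concrete, here is a minimal sketch of a SpreadOut-style round-robin All-to-All schedule, the classical baseline the paper compares against. This is an illustrative toy, not FLASH's algorithm: `all_to_all_rounds` is a hypothetical helper, and real schedulers must also account for shard sizes and link capacities.

```python
def all_to_all_rounds(n):
    """Round-robin All-to-All schedule among n servers (illustrative).

    In round r (1 <= r <= n-1), server i sends the shard destined for
    server (i + r) % n. Each server has exactly one outgoing and one
    incoming inter-server flow per round, so every NIC stays busy and
    no link carries two flows at once (avoiding incast). With equal
    shard sizes this is optimal; heterogeneous shards create the
    straggler rounds that motivate FLASH's intra-server pre-migration.
    """
    return [[(i, (i + r) % n) for i in range(n)] for r in range(1, n)]
```

For example, `all_to_all_rounds(4)` yields three rounds that together cover all 12 ordered server pairs.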
📝 Abstract
Scheduling All-to-All communications efficiently is fundamental to minimizing job completion times in distributed systems. Incast and straggler flows can slow down All-to-All transfers, and GPU clusters bring additional straggler challenges due to highly heterogeneous link capacities between technologies like NVLink and Ethernet. Existing schedulers all suffer from high overheads relative to theoretically optimal transfers: classical, simple scheduling algorithms such as SpreadOut fail to minimize transfer completion times, while modern optimization-based schedulers such as TACCL achieve better completion times but with computation times that can be orders of magnitude longer than the transfer itself. This paper presents FLASH, which schedules near-optimal All-to-All transfers with a simple, polynomial-time algorithm. FLASH keeps the bottleneck inter-server network maximally utilized and, in the background, shuffles data between GPUs over fast intra-server networks to mitigate stragglers. We prove that, so long as intra-server networks are significantly faster than inter-server networks, FLASH approaches near-optimal transfer completion times. We implement FLASH and demonstrate that its computational overheads are negligible, yet it achieves transfer completion times comparable to those of state-of-the-art solver-based schedulers.
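The background shuffling idea, equalizing per-GPU outgoing load within a server over fast intra-server links so that no single NIC becomes a straggler, can be sketched as a simple greedy rebalancing step. This is a hedged illustration under assumed conditions (intra-server transfers treated as free relative to the inter-server bottleneck); `balance_moves` is a hypothetical helper, not FLASH's actual pre-migration mechanism.

```python
def balance_moves(load):
    """Plan intra-server GPU-to-GPU moves to equalize outgoing load.

    load: per-GPU outgoing byte counts on one server. Returns a list of
    (src_gpu, dst_gpu, nbytes) moves that bring every GPU to the mean
    (integer-truncated for simplicity). The premise: shipping bytes over
    NVLink first is cheap, so the slow inter-server phase then finishes
    in the time of the average load rather than the maximum load.
    """
    target = sum(load) // len(load)
    surplus = [[i, l - target] for i, l in enumerate(load) if l > target]
    deficit = [[i, target - l] for i, l in enumerate(load) if l < target]
    moves, si, di = [], 0, 0
    while si < len(surplus) and di < len(deficit):
        s, d = surplus[si], deficit[di]
        amt = min(s[1], d[1])          # move as much as both sides allow
        moves.append((s[0], d[0], amt))
        s[1] -= amt
        d[1] -= amt
        if s[1] == 0:
            si += 1
        if d[1] == 0:
            di += 1
    return moves
```

For instance, with per-GPU loads `[10, 2, 6, 6]`, a single move of 4 units from GPU 0 to GPU 1 levels all four GPUs at 6, so the inter-server phase is bounded by the mean load instead of the heaviest GPU.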