🤖 AI Summary
This work addresses the inefficiency of conventional synchronous training in large-scale GPU clusters—specifically those with over 100,000 GPUs—where frequent hardware failures and prolonged recovery procedures severely degrade performance. The authors propose Fault-Tolerant Hybrid Sharded Data Parallelism (FT-HSDP), a training paradigm that, for the first time, treats data-parallel replicas as independent fault-tolerance units. By integrating dynamic participant management and a non-blocking catch-up mechanism, FT-HSDP enables localized fault recovery: only affected replicas are restarted while the others continue training uninterrupted. The system employs a Fault-Tolerant All-Reduce (FTAR) protocol in which the CPU coordinates control logic and the GPU executes data transfers, supporting efficient asynchronous recovery without compromising model accuracy. Experiments on a 100,000-GPU cluster demonstrate that this approach reduces fault-induced training stalls from 10 minutes to 3 minutes and increases effective training time from 44% to 80%.
📝 Abstract
Large-scale training systems typically use synchronous training, requiring all GPUs to be healthy simultaneously. In our experience training on O(100K) GPUs, synchronous training results in low efficiency due to frequent failures and long recovery times. To address this problem, we propose a novel training paradigm, Fault-Tolerant Hybrid Sharded Data Parallelism (FT-HSDP). FT-HSDP uses data-parallel replicas as units of fault tolerance. When failures occur, only the single data-parallel replica containing the failed GPU or server is taken offline and restarted, while the other replicas continue training. To realize this idea at scale, FT-HSDP incorporates several techniques: 1) We introduce a Fault-Tolerant All-Reduce (FTAR) protocol for gradient exchange across data-parallel replicas. FTAR relies on the CPU to drive the complex control logic for tasks like dynamically adding or removing participants, and relies on the GPU to perform data transfers for best performance. 2) We introduce a non-blocking catch-up protocol, allowing a recovering replica to rejoin training with minimal stall. Compared with fully synchronous training at O(100K) GPUs, FT-HSDP reduces the stall time due to failure recovery from 10 minutes to 3 minutes, increasing effective training time from 44% to 80%. We further demonstrate that FT-HSDP's asynchronous recovery does not cause any meaningful degradation in the accuracy of the resulting model.
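The core idea behind FTAR, as described above, can be illustrated with a small sketch: a CPU-side coordinator tracks which data-parallel replicas are live and decides the participant set for each step, while the data path averages gradients only across current participants. This is a hypothetical toy model, not the paper's implementation; all names (`Coordinator`, `all_reduce_avg`) are illustrative, and the GPU collective is replaced by plain Python so the membership logic is easy to follow.

```python
# Hypothetical sketch of the FTAR idea: CPU-driven membership control,
# with the reduce restricted to live replicas. Illustrative names only.
from dataclasses import dataclass, field


@dataclass
class Coordinator:
    """CPU-side control plane: tracks live replicas across steps."""
    live: set = field(default_factory=set)

    def register(self, replica_id: int) -> None:
        self.live.add(replica_id)

    def evict(self, replica_id: int) -> None:
        # A failed replica is simply dropped; the others keep training.
        self.live.discard(replica_id)

    def participants(self) -> list:
        return sorted(self.live)


def all_reduce_avg(gradients: dict, participants: list) -> list:
    """Simulated data plane: average gradients over current participants.

    In the real system this would be a GPU collective; here it is plain
    Python arithmetic.
    """
    if not participants:
        raise RuntimeError("no live replicas")
    length = len(gradients[participants[0]])
    total = [0.0] * length
    for rid in participants:
        for i, g in enumerate(gradients[rid]):
            total[i] += g
    return [t / len(participants) for t in total]


# Step 1: three healthy replicas contribute their gradients.
coord = Coordinator()
for rid in (0, 1, 2):
    coord.register(rid)
grads = {0: [2.0, 4.0], 1: [6.0, 12.0], 2: [4.0, 8.0]}
print(all_reduce_avg(grads, coord.participants()))  # → [4.0, 8.0]

# Step 2: replica 1 fails; the CPU evicts it, and the next reduce
# proceeds over the surviving replicas without stalling them.
coord.evict(1)
print(all_reduce_avg(grads, coord.participants()))  # → [3.0, 6.0]
```

A recovering replica would later be re-registered with the coordinator and, per the non-blocking catch-up protocol, would fetch the current state before being added back to the participant set.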