🤖 AI Summary
In large-scale LLM distributed training, RDMA network failures often cause complete job aborts, while existing application-layer fault tolerance (e.g., checkpointing) incurs disruptive training interruptions.
Method: This work proposes, for the first time, an RDMA-layer fine-grained failure state machine and transparent rerouting mechanism that seamlessly redirects RDMA traffic across NICs—without modifying applications (e.g., PyTorch/NCCL) or training code. The solution requires only lightweight driver-level modifications.
Contribution/Results: It enables zero-interruption training during faults, reducing progress loss to just 8% until the next checkpoint—a 92% reduction versus conventional approaches—while imposing less than 1.2% overhead on the data path. This bridges a critical gap between network- and application-layer fault tolerance, delivering truly transparent, zero-modification, RDMA-native resilience.
📝 Abstract
With gang scheduling in large-scale distributed Large Language Model training, a single network anomaly can propagate and cause complete task failure. The frequency of such anomalies increases with network scale. However, existing fault-tolerance mechanisms, such as checkpointing and runtime resilience methods, primarily operate at the application layer and inevitably cause disruptions in training progress.
We propose to address this challenge by introducing fault tolerance at the Remote Direct Memory Access (RDMA) layer and integrating it with existing application-layer techniques. We present SHIFT, a fault-resilient layer over RDMA that enables seamless redirection of RDMA traffic across different intra-host NICs. By allowing applications to continue execution in the presence of network anomalies until the next checkpoint, SHIFT effectively minimizes training progress loss. SHIFT is designed to be application-agnostic, transparent to applications, and low-overhead.
Through a carefully designed failure state machine and control flow, unmodified applications such as PyTorch with NCCL can run with RDMA-level fault tolerance. Experimental results demonstrate that SHIFT introduces minimal data path overhead while ensuring application continuity under network failures.