Adaptra: Straggler-Resilient Hybrid-Parallel Training with Pipeline Adaptation

📅 2025-04-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In large-scale DNN training, communication stragglers induce pipeline "bubbles" and GPU kernel-level head-of-line blocking, severely degrading training efficiency. Method: This paper proposes ADAPTRA, a framework that combines dynamic, adaptive pipeline scheduling with CPU-side RDMA offloading. An analytical performance model drives real-time pipeline schedule reconfiguration, and critical communication operations are offloaded from the GPU to the CPU via RDMA, eliminating GPU compute-core stalls caused by slow communication. The approach unifies hybrid-parallel training, online straggler detection, and analytical performance modeling. Contribution/Results: Across diverse settings, ADAPTRA speeds up training iterations by 1.2×–3.5×, suppresses cascading bubbles and computation stalls, and improves communication robustness and resource utilization in large-scale distributed training systems.

📝 Abstract
Training large Deep Neural Network (DNN) models at scale often encounters straggler issues, mostly in communications due to network congestion, RNIC/switch defects, or topological asymmetry. Under advanced pipeline parallelism, even minor communication delays can induce significant training slowdowns. This occurs because (1) slow communication disrupts the pipeline schedule, creating cascading "bubbles" in a domino effect, and (2) current GPU kernel scheduling is susceptible to head-of-line blocking, where slow communication blocks subsequent computations, further adding to these bubbles. To address these challenges, we present ADAPTRA, a straggler-resilient training system with two key optimizations. First, it optimally adapts the pipeline schedule in the presence of stragglers to absorb communication delays without inducing cascading bubbles, using a simple yet effective algorithm guided by an analytical model. Second, upon detecting slow communication, ADAPTRA offloads communication operations from GPU to host memory and utilizes CPU-side RDMA for data transfer. This eliminates head-of-line blocking as subsequent computation kernels can be scheduled immediately on GPUs. Together, these optimizations effectively reduce pipeline stalls in the presence of communication stragglers, improving the training iteration time by 1.2–3.5× in our experiments under various settings.
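The schedule-adaptation idea from the abstract can be illustrated with a toy model (the function, its name, and the numbers below are hypothetical, not the paper's actual analytical model): in a synchronous pipeline, a delayed transfer between adjacent stages stalls the downstream stage unless enough microbatches are already in flight to keep it busy. A minimal sketch of estimating how many extra in-flight microbatches would absorb a given delay:

```python
import math

def extra_warmup_microbatches(delay_ms: float, fwd_compute_ms: float) -> int:
    """Toy estimate (not the paper's model): each extra in-flight
    microbatch gives the downstream pipeline stage roughly
    fwd_compute_ms of compute to overlap with a delayed transfer,
    so ceil(delay / fwd_compute) extra microbatches hide the delay."""
    if delay_ms <= 0:
        return 0
    return math.ceil(delay_ms / fwd_compute_ms)

# Under this toy model, a 25 ms straggling transfer with 10 ms
# forward passes needs 3 extra in-flight microbatches to avoid a bubble.
print(extra_warmup_microbatches(25.0, 10.0))  # → 3
```

In practice, a real schedule adapter must also respect activation-memory limits, since each extra in-flight microbatch holds activations on the downstream stage.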
Problem

Research questions and friction points this paper is trying to address.

Addresses straggler-induced pipeline slowdowns in DNN training
Mitigates communication delays disrupting pipeline schedules
Eliminates head-of-line blocking in GPU kernel scheduling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts pipeline schedule to absorb communication delays
Offloads communication to CPU using RDMA
Reduces pipeline stalls from communication stragglers
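The detect-then-offload loop behind the second innovation can be sketched at a high level (the class name, EWMA scheme, and thresholds are illustrative assumptions, not the paper's actual detection mechanism): maintain a baseline latency per link from healthy transfers and flag a straggler when a transfer exceeds a multiple of that baseline, which would then trigger the CPU-side RDMA path:

```python
class CommStragglerDetector:
    """Hypothetical online detector: flag a transfer as straggling when
    its latency exceeds `threshold` times an EWMA baseline built from
    healthy samples. Straggling samples do not pollute the baseline."""

    def __init__(self, threshold: float = 2.0, alpha: float = 0.1):
        self.threshold = threshold  # how much slower counts as a straggler
        self.alpha = alpha          # EWMA smoothing factor
        self.baseline = None        # running estimate of healthy latency

    def observe(self, latency_ms: float) -> bool:
        if self.baseline is None:
            self.baseline = latency_ms  # bootstrap from the first sample
            return False
        if latency_ms > self.threshold * self.baseline:
            return True  # straggler detected; keep baseline untouched
        # Healthy sample: fold it into the baseline estimate.
        self.baseline += self.alpha * (latency_ms - self.baseline)
        return False

detector = CommStragglerDetector()
for _ in range(5):
    detector.observe(10.0)       # healthy transfers, ~10 ms
print(detector.observe(50.0))    # → True (5× slower than baseline)
```

A True result would then redirect the affected send/recv from the GPU path to pinned host memory plus CPU-side RDMA, so later compute kernels are no longer queued behind the slow transfer.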
Authors

Tianyuan Wu (CSE Department, HKUST)
Lunxi Cao (Hong Kong University of Science and Technology)
Hanfeng Lu (HKUST)
Xiaoxiao Jiang (Hong Kong University of Science and Technology)
Yinghao Yu (Engineer, Alibaba)
Siran Yang (Alibaba Group)
Guodong Yang (Alibaba Group)
Jiamang Wang (Alibaba Group)
Lin Qu (Alibaba Group)
Liping Zhang (Alibaba Group)
Wei Wang (Hong Kong University of Science and Technology)