Nonuniform-Tensor-Parallelism: Mitigating GPU failure impact for Scaled-up LLM Training

📅 2025-04-08
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
In large-scale LLM training across tens of thousands of GPUs, a single GPU failure severely degrades throughput because the tensor-parallelism (TP) degree is rigidly fixed across the whole scale-up domain. Method: This paper proposes Nonuniform Tensor Parallelism (NTP), a fault-aware parallelization mechanism that reduces the TP degree of the affected data-parallel (DP) replica upon GPU failure, letting its throughput adapt to the surviving hardware. It co-designs a fault-tolerant rack architecture with high-redundancy power delivery and thermal management that supports localized power boosting within the NVLink scale-up domain. Contribution/Results: NTP introduces the first “variable-TP-degree” nonuniform-parallelism paradigm, integrating failure-aware DP scheduling with hardware-level fault tolerance. Experiments show that with 0.1% of GPUs in a failed state, training-throughput degradation drops from nearly 10% to ≈0%, significantly improving robustness and effective compute utilization at massive scale.
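The summary's mechanism lends itself to a simple scheduling view. Below is a minimal Python sketch, not the authors' code: the `Replica` class and `assign_microbatches` helper are illustrative assumptions. It shows the core NTP idea, where a degraded replica's TP degree shrinks to its surviving-GPU count and per-step work is split in proportion so all replicas finish a step together.

```python
# Minimal sketch of NTP-style work rebalancing (illustrative names,
# not the paper's implementation).
from dataclasses import dataclass

@dataclass
class Replica:
    gpus_total: int   # GPUs in the scale-up domain (e.g., 72)
    gpus_failed: int  # GPUs currently failed in this replica

    @property
    def tp_degree(self) -> int:
        # NTP: the replica runs at a TP degree equal to its surviving GPUs.
        return self.gpus_total - self.gpus_failed

def assign_microbatches(replicas: list[Replica], global_microbatches: int) -> list[int]:
    """Split a step's micro-batches proportionally to surviving GPUs,
    so a degraded replica contributes throughput equal to the fraction
    of its GPUs that still function."""
    alive = [r.tp_degree for r in replicas]
    total = sum(alive)
    shares = [global_microbatches * a // total for a in alive]
    # Hand any floor-division remainder to the healthiest replicas.
    for i in sorted(range(len(shares)), key=lambda i: -alive[i]):
        if sum(shares) == global_microbatches:
            break
        shares[i] += 1
    return shares

replicas = [Replica(72, 0), Replica(72, 0), Replica(72, 1)]  # one failed GPU
print(assign_microbatches(replicas, 256))  # -> [86, 86, 84]
```

With one failure in a 72-GPU domain, the degraded replica gets roughly 71/215 of the step's work, matching the claim that it contributes throughput equal to its still-functional fraction.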

📝 Abstract
LLM training is scaled up to 10Ks of GPUs by a mix of data-(DP) and model-parallel (MP) execution. Critical to achieving efficiency is tensor-parallel (TP; a form of MP) execution within tightly-coupled subsets of GPUs, referred to as a scale-up domain, and the larger the scale-up domain the better the performance. New datacenter architectures are emerging with more GPUs able to be tightly-coupled in a scale-up domain, such as moving from 8 GPUs to 72 GPUs connected via NVLink. Unfortunately, larger scale-up domains increase the blast-radius of failures, with a failure of a single GPU potentially impacting TP execution on the full scale-up domain, which can degrade overall LLM training throughput dramatically. With as few as 0.1% of GPUs being in a failed state, a high TP-degree job can experience nearly 10% reduction in LLM training throughput. We propose nonuniform-tensor-parallelism (NTP) to mitigate this amplified impact of GPU failures. In NTP, a DP replica that experiences GPU failures operates at a reduced TP degree, contributing throughput equal to the percentage of still-functional GPUs. We also propose a rack-design with improved electrical and thermal capabilities in order to sustain power-boosting of scale-up domains that have experienced failures; combined with NTP, this can allow the DP replica with the reduced TP degree (i.e., with failed GPUs) to keep up with the others, thereby achieving near-zero throughput loss for large-scale LLM training.
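As a sanity check on the abstract's numbers, here is a back-of-the-envelope calculation. The assumptions are mine: independent failures, a 72-GPU NVLink domain, and, without NTP, a whole domain idling whenever any of its GPUs fails. The paper's ~10% figure is higher than this simple bound, plausibly because a high TP-degree job's replicas span additional failure surface.

```python
# Back-of-the-envelope check of the abstract's numbers (assumptions:
# independent failures; without NTP, a whole scale-up domain idles
# whenever any of its GPUs fails).
p_fail = 0.001   # 0.1% of GPUs in a failed state
domain = 72      # NVLink scale-up domain size (e.g., NVL72)

# Without NTP: a domain is lost if any of its 72 GPUs has failed.
loss_uniform = 1 - (1 - p_fail) ** domain
# With NTP: each degraded replica still contributes its surviving
# fraction, so expected loss is roughly the failed-GPU fraction.
loss_ntp = p_fail

print(f"without NTP: ~{loss_uniform:.1%} throughput lost")   # ~7.0%
print(f"with NTP:    ~{loss_ntp:.1%} (≈0% with power boosting)")
```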
Problem

Research questions and friction points this paper is trying to address.

Mitigating GPU failure impact on large-scale LLM training
Reducing throughput loss from failures in tensor-parallel execution
Improving fault tolerance in scaled-up GPU domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nonuniform tensor parallelism reduces the impact of GPU failures
Rack design with improved electrical and thermal capabilities sustains power boosting
DP replicas with failed GPUs operate at a reduced TP degree (see the sketch below)
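The power-boosting idea reduces to one line of arithmetic. This is my reading of the mechanism, not the paper's exact model: if a degraded replica must match healthy replicas' step time on an equal share of work, each surviving GPU must run slightly faster, and the rack's extra electrical and thermal headroom supplies that boost.

```python
# Rough arithmetic (my assumption of the mechanism): a replica that
# loses f of its n GPUs needs roughly an n/(n-f) per-GPU speedup to
# keep step time unchanged on the same per-replica workload.
n, f = 72, 1
boost = n / (n - f)
print(f"required per-GPU speedup: {boost:.3f}x (~{boost - 1:.1%})")  # ~1.4%
```

For a single failure out of 72 GPUs this is only ~1.4%, which is why modest localized power boosting, combined with NTP's proportional work assignment, can drive the overall throughput loss toward zero.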
Authors
Daiyaan Arfeen (Carnegie Mellon University)
Dheevatsa Mudigere (NVIDIA)
Ankit More (NVIDIA)
Bhargava Gopireddy (NVIDIA)
Ahmet Inci (NVIDIA)
Gregory R. Ganger (Carnegie Mellon University)