Malleus: Straggler-Resilient Hybrid Parallel Training of Large-scale Models via Malleable Data and Model Parallelization

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the severe efficiency degradation caused by dynamic GPU stragglers in hybrid parallel training, this paper proposes Malleus, a straggler-resilient hybrid parallel training framework. Methodologically, it introduces (1) a real-time, per-GPU performance-aware dynamic parallelism planning algorithm, enabling joint and malleable parallelization across the data, model, and pipeline dimensions; and (2) an integrated suite of techniques (fine-grained performance monitoring, combinatorial optimization-based re-planning, incremental model state migration, elastic data sharding, and hierarchical scheduling) that supports lossless, low-overhead runtime adaptation of parallel strategies. Evaluated on large language models with up to 110B parameters, Malleus achieves 2.63x-5.28x higher training throughput than state-of-the-art frameworks under diverse straggler scenarios, while preserving convergence stability throughout training.

📝 Abstract
As the scale of models and training data continues to grow, there is an expanding reliance on more GPUs to train large-scale models, which inevitably increases the likelihood of encountering dynamic stragglers, i.e., devices that occasionally lag behind in performance. However, hybrid parallel training, one of the de facto paradigms for training large models, is typically sensitive to stragglers. This paper presents Malleus, a straggler-resilient hybrid parallel training framework for large-scale models. Malleus quantifies stragglers at the nuanced, per-GPU granularity during training, and develops a novel planning algorithm to deduce the optimal parallelization of GPU devices, pipeline stages, model layers, and training data, maximizing training efficiency when stragglers exist. In addition, once a shift in the straggler situation is detected, Malleus adaptively adjusts the parallelization via a re-planning process, and seamlessly and efficiently migrates the model states on the fly, without sacrificing the stability of the training tasks. Empirical results on large language models with up to 110B parameters show that Malleus consistently outperforms existing parallel training frameworks under various straggler situations, delivering 2.63-5.28x efficiency improvements on average.
Problem

Research questions and friction points this paper is trying to address.

Addresses dynamic stragglers in hybrid parallel training of large models
Optimizes GPU parallelization for efficiency with stragglers present
Adaptively adjusts parallelization without disrupting training stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Per-GPU straggler quantification during training
Optimal parallelization planning algorithm
Dynamic model state migration
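The monitoring/re-planning loop behind the bullets above can be illustrated with a minimal sketch. This is not Malleus's actual algorithm or API (the paper's planner jointly optimizes GPUs, pipeline stages, layers, and data via combinatorial optimization); the function names, thresholds, and proportional-sharing rule below are illustrative assumptions showing only the basic idea of per-GPU straggler detection and throughput-proportional data re-sharding:

```python
# Illustrative sketch only, NOT Malleus's real planner: detect stragglers
# from per-GPU step times, then re-shard data in proportion to throughput.
from statistics import median

def detect_stragglers(step_times, slowdown_threshold=1.5):
    """Flag GPUs whose recent step time exceeds the cluster median
    by more than `slowdown_threshold` (an assumed cutoff)."""
    base = median(step_times.values())
    return {gpu for gpu, t in step_times.items() if t > slowdown_threshold * base}

def rebalance_shards(step_times, total_samples):
    """Assign per-GPU micro-batch shares inversely proportional to step
    time, so slower GPUs receive less data (a toy form of elastic sharding)."""
    speeds = {gpu: 1.0 / t for gpu, t in step_times.items()}
    total_speed = sum(speeds.values())
    shares = {gpu: int(total_samples * s / total_speed)
              for gpu, s in speeds.items()}
    # Hand any rounding remainder to the fastest GPU.
    fastest = max(speeds, key=speeds.get)
    shares[fastest] += total_samples - sum(shares.values())
    return shares

# Example: GPU 2 is roughly 3x slower than its peers.
times = {0: 1.0, 1: 1.1, 2: 3.2, 3: 0.9}
print(detect_stragglers(times))        # {2}
print(rebalance_shards(times, 128))    # GPU 2 gets the smallest share
```

In the actual system, detecting a shifted straggler situation would additionally trigger re-planning of the model-parallel layout and an incremental migration of model states, which this sketch omits.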
Authors
Haoyang Li (Peking University)
Fangcheng Fu (Shanghai Jiao Tong University)
Hao Ge (Peking University)
Sheng Lin (Peking University)
Xuanyu Wang (Peking University)
Jiawen Niu (Peking University)
Yujie Wang (Peking University)
Hailin Zhang (Peking University)
Xiaonan Nie (ByteDance Seed, Peking University)
Bin Cui (Peking University)