🤖 AI Summary
To address the severe efficiency degradation caused by dynamic GPU stragglers in hybrid parallel training, this paper proposes Malleus, a straggler-resilient hybrid parallel training framework. Methodologically, it introduces (1) a real-time, per-GPU performance-aware dynamic parallelism planning algorithm, enabling joint and adaptive parallelization across the data, model, and pipeline dimensions; and (2) an integrated suite of techniques, including fine-grained performance monitoring, optimization-based re-planning, on-the-fly model state migration, and elastic data sharding, to support lossless, low-overhead runtime adaptation of parallel strategies. Evaluated on large language models with up to 110B parameters, Malleus achieves on average 2.63–5.28× higher training efficiency than existing parallel training frameworks under diverse straggler scenarios, while preserving training stability throughout.
📝 Abstract
As the scale of models and training data continues to grow, training large-scale models relies on ever more GPUs, which inevitably increases the likelihood of encountering dynamic stragglers, i.e., devices that occasionally lag behind in performance. However, hybrid parallel training, one of the de facto paradigms for training large models, is typically sensitive to such stragglers. This paper presents Malleus, a straggler-resilient hybrid parallel training framework for large-scale models. Malleus quantifies stragglers at the nuanced, per-GPU granularity during training, and develops a novel planning algorithm to deduce the optimal parallelization of GPU devices, pipeline stages, model layers, and training data, maximizing training efficiency in the presence of stragglers. In addition, once a shift in the straggler situation is detected, Malleus adaptively adjusts the parallelization via a re-planning process, and seamlessly and efficiently migrates the model states on the fly, without sacrificing the stability of the training tasks. Empirical results on large language models with up to 110B parameters show that Malleus consistently outperforms existing parallel training frameworks under various straggler situations, delivering on average 2.63–5.28× efficiency improvements.
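The core idea of straggler-aware elastic data sharding can be illustrated with a minimal sketch: assign each data-parallel worker a number of micro-batches inversely proportional to its measured per-step time, so slower GPUs receive less work. All names below are illustrative, not from the paper; Malleus's actual planning algorithm jointly optimizes data, pipeline, and layer assignments rather than data shards alone.

```python
def plan_shards(step_times, total_microbatches):
    """Split total_microbatches across workers in inverse proportion to
    their recent per-step times (a slower GPU gets fewer micro-batches).

    A toy stand-in for throughput-proportional load balancing; not the
    paper's combinatorial planning algorithm.
    """
    # Throughput of each worker is the reciprocal of its step time.
    speeds = [1.0 / t for t in step_times]
    total_speed = sum(speeds)
    # Ideal fractional shares, then round down while preserving the total.
    shares = [s / total_speed * total_microbatches for s in speeds]
    alloc = [int(x) for x in shares]
    remainder = total_microbatches - sum(alloc)
    # Hand leftover micro-batches to the workers with the largest
    # fractional parts (largest-remainder rounding).
    order = sorted(range(len(shares)),
                   key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in order[:remainder]:
        alloc[i] += 1
    return alloc

# Worker 2 is a straggler (2.0 s/step) and receives fewer micro-batches.
print(plan_shards([1.0, 1.0, 2.0, 1.0], 32))
```

In a real re-planning loop, `step_times` would come from continuous per-GPU monitoring, and a shift in the measured profile would trigger recomputation of the plan followed by model state migration.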