Automatically Planning Optimal Parallel Strategy for Large Language Models

📅 2024-12-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manual design of parallelization strategies for distributed large language model (LLM) training is inefficient and incurs high search costs. Method: the paper proposes a hardware-aware automatic parallelization framework. Its core innovation is the first fine-grained, decoupled runtime latency simulation model, which explicitly captures computation, communication, and their overlap. Leveraging this model, the framework prunes the strategy space while preserving global optimality, enabling millisecond-scale strategy recommendation; it integrates multi-dimensional hardware-aware modeling, real-time optimization search, and strategy-space reduction. Results: multi-node experiments show 96% average latency-prediction accuracy, the recommended strategies are globally optimal in all evaluated scenarios, and strategy search time is significantly reduced.
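The decoupled timing model described in the summary can be sketched as follows. This is a minimal illustration of the idea (communication hidden behind computation does not add to the critical path); all names and numbers are assumptions, not the paper's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class StepTime:
    """Per-iteration time components in seconds (illustrative names)."""
    compute: float   # pure computation time
    comm: float      # pure communication time
    overlap: float   # portion of communication hidden behind computation

    def total(self) -> float:
        # Overlapped communication is hidden behind computation, so only
        # the non-overlapped remainder extends the critical path.
        return self.compute + max(self.comm - self.overlap, 0.0)

# Example: 120 ms compute, 50 ms comm, 30 ms of which overlaps with compute.
step = StepTime(compute=0.120, comm=0.050, overlap=0.030)
print(f"{step.total():.3f}")  # prints 0.140
```

Summing the non-overlapped remainder rather than raw communication time is what lets such a model reward strategies that schedule communication concurrently with computation.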

📝 Abstract
Transformer-based large language models continue to grow in parameter count, and so do the computing clusters that train them, making techniques for quickly mobilizing large amounts of computing resources for parallel training increasingly important. In this paper, we propose an automatic parallelization algorithm that plans the maximum-throughput parallel strategy from model and hardware information. By decoupling training time into computation, communication, and overlap, we establish a training-duration simulation model; based on this model, we prune the parallel solution space to shorten the search. Multi-node experiments show that the algorithm estimates parallel training duration in real time with an average accuracy of 96%, and in our tests the strategy it recommends is always globally optimal.
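The pruned search over the parallel solution space can be sketched as below. The device count, the pruning rules, and the cost function are illustrative assumptions standing in for the paper's simulation model, not its actual method.

```python
from itertools import product

def divisors(n: int) -> list[int]:
    return [d for d in range(1, n + 1) if n % d == 0]

def search(num_gpus: int, num_layers: int, hidden_size: int, cost):
    """Score pruned (dp, tp, pp) strategies with a user-supplied
    cost(dp, tp, pp) -> seconds model and return the fastest one."""
    best, best_t = None, float("inf")
    for dp, tp, pp in product(divisors(num_gpus), repeat=3):
        if dp * tp * pp != num_gpus:
            continue  # must use every device exactly once
        if pp > num_layers or tp > hidden_size:
            continue  # prune: cannot split finer than the model allows
        t = cost(dp, tp, pp)
        if t < best_t:
            best, best_t = (dp, tp, pp), t
    return best, best_t

# Toy cost: tensor parallelism adds all-reduce traffic, pipeline adds bubbles.
toy = lambda dp, tp, pp: 1.0 / (dp * tp * pp) + 0.02 * (tp - 1) + 0.03 * (pp - 1)
print(search(8, 24, 4096, toy))
```

Because the pruning rules only discard strategies that are infeasible (or provably dominated, in the paper's case), the remaining exhaustive scan still returns the global optimum, which is how a simulation-model-driven search can be both fast and optimal.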
Problem

Research questions and friction points this paper is trying to address.

Large Language Model Training
Optimal Parallel Computing Strategy
Efficient Computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal Parallel Strategy
Time Prediction Model
Resource Efficiency
👥 Authors
Zongbiao Li, HUAWEI
Xiezhao Li, HUAWEI
Yinghao Cui, HUAWEI
Yijun Chen, HUAWEI
Zhixuan Gu, HUAWEI
Yuxuan Liu, HUAWEI
Wenbo Zhu, HUAWEI
Fei Jia, HUAWEI
Ke Liu, HUAWEI
Qifeng Li, University of Central Florida (Convex Optimization, Nonlinear Systems, Electrical and Energy Systems, Smart grid technologies)
Junyao Zhan, HUAWEI
Jiangtao Zhou, HUAWEI
Chenxi Zhang, HUAWEI
Qike Liu, HUAWEI