🤖 AI Summary
Manual design of parallelization strategies for distributed large language model (LLM) training is inefficient and incurs high search costs. Method: This paper proposes a hardware-aware automatic parallelization optimization framework. Its core innovation is the first fine-grained, decoupled runtime latency simulation model that explicitly captures computation, communication, and their overlap phases. Leveraging this model, the framework systematically prunes the strategy space while guaranteeing global optimality, enabling millisecond-scale strategy recommendation. The method integrates multi-dimensional hardware-aware modeling, real-time optimization search, and strategy-space reduction. Results: Multi-node experiments demonstrate 96% latency prediction accuracy; the recommended strategies achieve global optimality across all evaluated scenarios, and strategy search time is significantly reduced.
📝 Abstract
The number of parameters in transformer-based large language models continues to grow, as does the scale of the compute clusters used to train them. The ability to quickly marshal large amounts of computing resources for parallel training is therefore increasingly important. In this paper, we propose an automatic parallelization algorithm that plans the maximum-throughput parallel strategy based on model and hardware information. By decoupling training time into computation, communication, and their overlap, we build a training-duration simulation model. Based on this simulation model, we prune the parallel strategy space to shorten the search time. Multi-node experiments show that the algorithm estimates parallel training duration in real time with an average accuracy of 96%, and in our tests the strategy it recommends is always globally optimal.
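The core idea of the decoupled simulation model can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the `StageCost` structure, the `overlap_ratio` parameter, and the rule that hidden communication is capped by compute time are all assumptions made here for clarity.

```python
# Hedged sketch of a decoupled latency model: per-stage computation,
# communication, and overlap terms, plus a brute-force pick over a
# (pre-pruned) candidate strategy set. All names are illustrative.
from dataclasses import dataclass

@dataclass
class StageCost:
    compute_ms: float      # pure computation time for one stage
    comm_ms: float         # pure communication time for one stage
    overlap_ratio: float   # fraction of comm hidden behind compute (0..1)

def stage_latency(s: StageCost) -> float:
    # Hidden communication cannot exceed the compute it overlaps with.
    hidden = min(s.comm_ms * s.overlap_ratio, s.compute_ms)
    return s.compute_ms + s.comm_ms - hidden

def iteration_latency(stages: list[StageCost]) -> float:
    # Total simulated time for one training iteration.
    return sum(stage_latency(s) for s in stages)

def best_strategy(candidates: dict[str, list[StageCost]]) -> str:
    # Evaluate each surviving candidate with the simulator; the cheap
    # analytic model is what makes real-time recommendation feasible.
    return min(candidates, key=lambda k: iteration_latency(candidates[k]))
```

For example, a stage with 10 ms of compute, 4 ms of communication, and 50% overlap yields 10 + 4 − 2 = 12 ms of simulated latency; comparing such totals across pruned candidates selects the recommended strategy.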