🤖 AI Summary
To address the low training efficiency and poor scalability of large language models (LLMs) on supercomputing clusters, this paper proposes AsyncHZP, a high-efficiency parallel training framework built on hierarchical parameter sharding and asynchronous scheduling. Its core innovations are an adaptive re-sharding strategy and a multi-stream asynchronous execution mechanism that overlaps all-gather and reduce-scatter communication with computation in background threads, significantly mitigating the communication overhead induced by fine-grained sharding. Hierarchical replica-group management and low-fragmentation memory scheduling further ensure simplicity, high memory utilization, and strong scalability. Experiments show that AsyncHZP converges stably on both dense and Mixture-of-Experts (MoE) architectures and substantially outperforms conventional N-dimensional (ND) parallelism in training throughput and scaling efficiency, without intricate hyperparameter tuning, achieving state-of-the-art performance.
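The resharding idea can be pictured with a minimal, pure-Python sketch: each rank in a replica group holds only its shard of the flat parameter list, and an all-gather-style step reconstructs the full parameters when needed. The function names `shard` and `all_gather` are hypothetical stand-ins, not the paper's API, and real systems operate on tensors and collective communication rather than Python lists.

```python
# Illustrative sketch (assumption: simplified ZeRO-style sharding over lists,
# not the actual AsyncHZP implementation).

def shard(params, world_size):
    """Split a flat parameter list into near-equal shards, one per rank."""
    base, rem = divmod(len(params), world_size)
    shards, start = [], 0
    for rank in range(world_size):
        size = base + (1 if rank < rem else 0)  # spread the remainder
        shards.append(params[start:start + size])
        start += size
    return shards

def all_gather(shards):
    """Reconstruct the full parameter list from every rank's shard."""
    full = []
    for s in shards:
        full.extend(s)
    return full

params = list(range(10))                 # stand-in for flat model parameters
shards = shard(params, world_size=4)
assert all_gather(shards) == params      # every rank can recover the whole
```

Each rank persistently stores only `len(params) / world_size` elements; the memory saving is what ZeRO-family methods trade against the all-gather communication that AsyncHZP then hides asynchronously.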
📝 Abstract
The training efficiency and scalability of language models on massive clusters remain a critical bottleneck. Mainstream approaches such as ND parallelism are often cumbersome and complex, while flexible alternatives such as the Zero Redundancy Optimizer (ZeRO) are frequently hampered by communication overhead. In this paper, we propose Asynchronous Hierarchical Zero Parallelism (AsyncHZP), a novel asynchronous variant of ZeRO designed to achieve superior performance while maintaining simplicity and memory efficiency. Unlike traditional ZeRO, whose over-fine-grained sharding can lead to inefficient communication, AsyncHZP adaptively reshards parameters, gradients, and optimizer states across different replica groups, optimizing device memory utilization and significantly reducing communication overhead. In addition, we design a multi-stream asynchronous scheduling method that executes parameter all-gather and gradient reduce-scatter operations in dedicated background threads, effectively overlapping communication with computation while incurring negligible memory fragmentation. Empirical evaluations on both Dense and Mixture-of-Experts (MoE) models confirm that AsyncHZP remains stable at scale and consistently outperforms classic ND parallelism, achieving state-of-the-art performance without complex strategy tuning and thereby simplifying the path to efficient large-scale training.
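The overlap mechanism described above can be sketched with standard-library threads: while the current layer computes, a background thread runs the (here simulated) all-gather for the next layer, so communication latency hides behind computation. Everything here is an assumption-laden toy, `fetch_params`, `compute`, and `run_pipeline` are hypothetical names, and a real system would use CUDA streams and collective-communication libraries rather than Python threads.

```python
# Illustrative sketch (assumption: Python threads stand in for AsyncHZP's
# dedicated background communication streams; not the actual implementation).
import threading
import queue

def fetch_params(layer_id):
    """Simulated all-gather: return the 'full' parameters of one layer."""
    return f"params[{layer_id}]"

def compute(layer_id, params):
    """Simulated forward computation on one layer."""
    return f"out[{layer_id}]<-{params}"

def run_pipeline(num_layers):
    prefetched = queue.Queue(maxsize=1)  # bounded: comms never run far ahead

    def prefetch(layer_id):
        prefetched.put(fetch_params(layer_id))

    # Launch the first all-gather before any computation starts.
    threading.Thread(target=prefetch, args=(0,)).start()
    outputs = []
    for layer in range(num_layers):
        params = prefetched.get()          # blocks only if comms lag behind
        if layer + 1 < num_layers:         # next gather runs in background...
            threading.Thread(target=prefetch, args=(layer + 1,)).start()
        outputs.append(compute(layer, params))  # ...while this layer computes
    return outputs

print(run_pipeline(3))
```

Because `get()` synchronizes on each layer's parameters, the result is deterministic even though the gathers run concurrently; the bounded queue models the limited staging memory that keeps fragmentation low.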