AsyncHZP: Hierarchical ZeRO Parallelism with Asynchronous Scheduling for Scalable LLM Training

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low training efficiency and poor scalability of large language models (LLMs) on supercomputing clusters, this paper proposes AsyncHZP—a high-efficiency parallel training framework based on hierarchical parameter partitioning and asynchronous scheduling. Its core innovations include an adaptive re-sharding strategy and a multi-stream asynchronous execution mechanism that overlaps All-Gather and Reduce-Scatter communications with computation in background threads, significantly mitigating communication overhead induced by fine-grained sharding. Furthermore, hierarchical replica group management and low-memory-fragmentation scheduling jointly ensure simplicity, high memory utilization, and strong scalability. Experiments demonstrate that AsyncHZP achieves stable convergence for both dense and Mixture-of-Experts (MoE) architectures, substantially outperforming conventional N-dimensional (ND) parallelism in training throughput and scaling efficiency—without requiring intricate hyperparameter tuning—thereby attaining state-of-the-art performance.
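The multi-stream overlap described above can be illustrated with a minimal, hypothetical sketch (not the paper's code): a background thread prefetches the next layer's gathered parameters while the current layer computes, in the spirit of AsyncHZP's asynchronous scheduling. The real system uses dedicated CUDA streams for All-Gather and Reduce-Scatter; a plain thread pool and stand-in functions are assumptions here.

```python
# Hypothetical sketch of communication/computation overlap via a
# background thread. `all_gather` and `compute` are stand-ins, not
# real collective or model APIs.
from concurrent.futures import ThreadPoolExecutor
import time

def all_gather(layer_id):
    """Stand-in for a collective that fetches one layer's full parameters."""
    time.sleep(0.01)  # pretend network latency
    return f"params[{layer_id}]"

def compute(layer_id, params):
    """Stand-in for the forward computation of one layer."""
    return f"out[{layer_id}]<-{params}"

def forward_with_prefetch(num_layers):
    outputs = []
    with ThreadPoolExecutor(max_workers=1) as comm:
        fut = comm.submit(all_gather, 0)       # prefetch layer 0 up front
        for i in range(num_layers):
            params = fut.result()              # wait only for this layer's params
            if i + 1 < num_layers:
                fut = comm.submit(all_gather, i + 1)  # overlap: gather next layer
            outputs.append(compute(i, params))         # while computing this one
    return outputs
```

Because the gather for layer i+1 is issued before layer i's compute runs, the communication latency is hidden behind computation rather than serialized with it.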

📝 Abstract
The training efficiency and scalability of language models on massive clusters currently remain a critical bottleneck. Mainstream approaches like ND parallelism are often cumbersome and complex, while flexible alternatives such as the Zero Redundancy Optimizer (ZeRO) are frequently hampered by communication overhead. In this paper, we propose Asynchronous Hierarchical Zero Parallelism (AsyncHZP), a novel asynchronous variant of ZeRO designed to achieve superior performance while maintaining simplicity and memory efficiency. Unlike traditional ZeRO, which employs over-fine-grained sharding that can lead to inefficient communication, AsyncHZP adaptively reshards parameters, gradients, and optimizer states across different replica groups. This strategy optimizes device memory utilization and significantly reduces communication overhead. In addition, we design a multi-stream asynchronous scheduling method that executes parameter all-gather and gradient reduce-scatter operations in dedicated background threads, effectively overlapping communication with computation while incurring negligible memory fragmentation. Empirical evaluations on both Dense and Mixture-of-Experts (MoE) models confirm that AsyncHZP maintains robust stability at scale. It consistently outperforms classic ND parallelism, achieving state-of-the-art performance without complex strategic tuning, thereby simplifying the path to efficient large-scale training.
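The hierarchical resharding idea can be sketched as follows. This is an illustrative assumption, not the paper's implementation: where classic ZeRO shards every state across all N ranks, the sketch picks a shard-group size per state type (parameters, gradients, optimizer states) and replicates shards across the remaining ranks, trading some memory for coarser, cheaper collectives. Function and variable names are hypothetical.

```python
# Hypothetical sketch of hierarchical sharding: each global rank belongs
# to a replica group and holds one contiguous shard within its group.
def shard_bounds(numel, group_size, rank_in_group):
    """Contiguous [start, end) slice of a flat tensor owned by one rank."""
    per = (numel + group_size - 1) // group_size  # ceil division
    start = min(rank_in_group * per, numel)
    return start, min(start + per, numel)

def hierarchical_layout(world_size, shard_group_size, rank):
    """Map a global rank to (replica_group_index, rank_within_shard_group)."""
    assert world_size % shard_group_size == 0
    return rank // shard_group_size, rank % shard_group_size
```

For example, with 16 ranks and optimizer states sharded over groups of 8 (two replica groups), global rank 11 sits in replica group 1 as local rank 3; parameters could independently use groups of 4 (four replicas), which is the adaptive, per-state choice the abstract describes.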
Problem

Research questions and friction points this paper is trying to address.

Reducing communication overhead in large-scale language model training
Improving memory efficiency through adaptive parameter and gradient sharding
Overlapping communication with computation via asynchronous scheduling methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asynchronous ZeRO variant for scalable LLM training
Adaptive parameter resharding across replica groups
Multi-stream scheduling overlaps communication with computation
Huawei Bai
ByteDance Seed
Yifan Huang
ByteDance Seed
Wenqi Shi
Assistant Professor, University of Texas Southwestern Medical Center
Ansheng You
ByteDance Seed
Feifan Shao
ByteDance Seed
Tengfei Han
ByteDance Seed
Minghui Yu
ByteDance Seed