DiLoCoX: A Low-Communication Large-Scale Training Framework for Decentralized Cluster

πŸ“… 2025-06-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the communication bottleneck that hinders training of large language models with more than 100 billion parameters on decentralized clusters with only 1 Gbps interconnects, this paper proposes DiLoCoX, the first decentralized distributed training framework designed for models at this scale. Methodologically, it combines pipeline parallelism with a dual-optimizer policy, introduces a β€œone-step-delay” overlap of communication and local training, and devises an adaptive gradient compression scheme with theoretically guaranteed convergence. Empirically, the framework pre-trains a 107B-parameter model over a 1 Gbps network and achieves a 357Γ— speedup over a conventional AllReduce baseline while keeping model convergence nearly unchanged. This work substantially reduces reliance on high-bandwidth interconnect hardware, enabling resource-efficient, democratized large-model training in infrastructure-constrained environments.
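The one-step-delay mechanism can be pictured as a DiLoCo-style outer loop in which the all-reduce of the previous round's pseudo-gradient runs in the background while the current round of local training proceeds, and the resulting outer update is applied one round late. The sketch below is a minimal illustration of that scheduling idea under those assumptions, not the paper's implementation; `start_async_allreduce`, `local_train`, and `one_step_delay_loop` are hypothetical placeholders.

```python
import threading
import numpy as np

def start_async_allreduce(delta):
    """Hypothetical placeholder for the slow inter-cluster all-reduce of a
    (compressed) pseudo-gradient; returns a callable that waits for the result."""
    box = {}
    def _run():
        box["avg"] = delta  # a real implementation would average across workers here
    worker = threading.Thread(target=_run)
    worker.start()
    def wait():
        worker.join()
        return box["avg"]
    return wait

def local_train(params, steps):
    """Hypothetical placeholder for `steps` inner-optimizer updates on local data."""
    return params - 0.01 * np.random.randn(*params.shape)

def one_step_delay_loop(params, rounds, inner_steps, outer_lr=0.7):
    prev_delta, pending = None, None
    for _ in range(rounds):
        snapshot = params.copy()
        # Ship last round's pseudo-gradient while this round trains locally.
        pending = start_async_allreduce(prev_delta) if prev_delta is not None else None
        params = local_train(params, inner_steps)
        prev_delta = snapshot - params              # this round's pseudo-gradient
        if pending is not None:
            params = params - outer_lr * pending()  # outer update, one step stale
    return params

params = one_step_delay_loop(np.zeros(4), rounds=5, inner_steps=100)
```

Because the outer update is applied with one round of staleness, the slow inter-cluster communication is hidden behind local compute instead of blocking it, which is what makes a 1 Gbps interconnect workable.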

πŸ“ Abstract
The distributed training of foundation models, particularly large language models (LLMs), demands a high level of communication. Consequently, it is highly dependent on a centralized cluster with fast and reliable interconnects. Can we conduct training on slow networks and thereby unleash the power of decentralized clusters when dealing with models exceeding 100 billion parameters? In this paper, we propose DiLoCoX, a low-communication large-scale decentralized cluster training framework. It combines Pipeline Parallelism with Dual Optimizer Policy, One-Step-Delay Overlap of Communication and Local Training, and an Adaptive Gradient Compression Scheme. This combination significantly improves the scale of parameters and the speed of model pre-training. We justify the benefits of one-step-delay overlap of communication and local training, as well as the adaptive gradient compression scheme, through a theoretical analysis of convergence. Empirically, we demonstrate that DiLoCoX is capable of pre-training a 107B foundation model over a 1Gbps network. Compared to vanilla AllReduce, DiLoCoX can achieve a 357x speedup in distributed training while maintaining negligible degradation in model convergence. To the best of our knowledge, this is the first decentralized training framework successfully applied to models with over 100 billion parameters.
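The adaptive gradient compression scheme is described only at a high level here, so the sketch below illustrates one common shape such a scheme can take: top-k sparsification of the pseudo-gradient with error feedback, where the compression ratio is adjusted from the observed relative error. The adaptation rule, class name, and thresholds are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def topk_compress(delta, ratio):
    """Keep only the largest-magnitude `ratio` fraction of entries (indices + values)."""
    flat = delta.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def decompress(idx, values, shape):
    out = np.zeros(int(np.prod(shape)))
    out[idx] = values
    return out.reshape(shape)

class AdaptiveCompressor:
    """Error-feedback top-k compressor whose ratio is adapted from the relative
    compression error (illustrative adaptation rule, not the paper's)."""
    def __init__(self, ratio=0.1, min_ratio=0.01):
        self.ratio = ratio
        self.min_ratio = min_ratio
        self.residual = None

    def compress(self, delta):
        if self.residual is not None:
            delta = delta + self.residual           # error feedback: re-add dropped mass
        idx, values = topk_compress(delta, self.ratio)
        approx = decompress(idx, values, delta.shape)
        self.residual = delta - approx              # carry the dropped mass forward
        rel_err = np.linalg.norm(self.residual) / (np.linalg.norm(delta) + 1e-12)
        # Compress harder while the error stays small, back off otherwise.
        self.ratio = max(self.min_ratio, min(1.0, self.ratio * (0.9 if rel_err < 0.3 else 1.1)))
        return idx, values

comp = AdaptiveCompressor()
idx, vals = comp.compress(np.random.randn(1000))
approx = decompress(idx, vals, (1000,))
```

Error feedback is what lets such a scheme keep convergence close to the uncompressed baseline: whatever the compressor drops in one round is added back before the next round's compression.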
Problem

Research questions and friction points this paper is trying to address.

How to train large models on decentralized clusters connected by slow (about 1 Gbps) networks
How to reduce the heavy communication overhead of distributed foundation-model training
How to scale decentralized training to models with over 100 billion parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pipeline Parallelism with Dual Optimizer Policy (see the sketch after this list)
One-Step-Delay Overlap of Communication and Training
Adaptive Gradient Compression Scheme
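A dual-optimizer policy of the kind listed above typically pairs a fast inner optimizer for local steps with a slower outer optimizer applied to the pseudo-gradient, as popularized by DiLoCo. The PyTorch sketch below shows that split under assumed optimizer choices (AdamW inner, Nesterov SGD outer); worker-to-worker averaging, pipeline parallelism, and compression are omitted.

```python
import copy
import torch

model = torch.nn.Linear(16, 16)                      # stand-in for one pipeline stage
inner_opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Outer ("global") copy of the weights with its own optimizer.
global_model = copy.deepcopy(model)
outer_opt = torch.optim.SGD(global_model.parameters(), lr=0.7, momentum=0.9, nesterov=True)

def inner_phase(steps):
    """Local training with the inner optimizer on a toy objective."""
    for _ in range(steps):
        loss = model(torch.randn(8, 16)).pow(2).mean()
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()

def outer_phase():
    """Treat (global - local) as the gradient for the outer optimizer."""
    with torch.no_grad():
        for gp, lp in zip(global_model.parameters(), model.parameters()):
            # Pseudo-gradient; in the real framework it is averaged across workers first.
            gp.grad = gp.detach() - lp.detach()
    outer_opt.step()
    # Resynchronize the local model with the updated global weights.
    model.load_state_dict(global_model.state_dict())

for _ in range(3):            # a few outer rounds
    inner_phase(steps=50)
    outer_phase()
```

The design rationale is bandwidth: workers synchronize only once per outer round instead of every step, so inter-cluster traffic drops roughly in proportion to the number of local steps per round.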
Authors
Ji Qi
China Mobile (Suzhou) Software Technology, JiangSu, China
WenPeng Zhu
China Mobile (Suzhou) Software Technology, JiangSu, China
Li Li
China Mobile (Suzhou) Software Technology, JiangSu, China
Ming Wu
Zero Gravity Labs
YingJun Wu
China Mobile (Suzhou) Software Technology, JiangSu, China
Wu He
China Mobile (Suzhou) Software Technology, JiangSu, China
Xun Gao
JILA and Physics Department at CU Boulder
Jason Zeng
Zero Gravity Labs
Michael Heinrich
Zero Gravity Labs