Streaming DiLoCo with overlapping communication: Towards a Distributed Free Lunch

📅 2025-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Communication bottlenecks severely hinder distributed training of large language models (LLMs) over low-bandwidth networks. Method: This paper extends the DiLoCo framework with three complementary techniques: streaming, chunk-by-chunk parameter synchronization; overlap of communication with ongoing computation; and INT8 quantization of the exchanged data. Time-sliced streaming updates and asynchronous pipelined scheduling are unified via chunked AllReduce with communication-computation overlap. Contribution/Results: The approach removes the reliance on high-bandwidth co-located hardware. For billion-parameter model training, it matches the convergence quality of full-bandwidth baselines while significantly accelerating end-to-end training, and reduces peak inter-worker communication bandwidth by two orders of magnitude (100×) with negligible impact on model quality.
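The INT8 step mentioned in the summary can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simple symmetric per-tensor quantization scheme (function names `quantize_int8`/`dequantize_int8` are hypothetical), which shrinks each exchanged float32 element from 4 bytes to 1 plus one float32 scale per tensor:

```python
import numpy as np

def quantize_int8(delta):
    """Symmetric per-tensor INT8 quantization of an outer-gradient tensor.

    Returns the INT8 payload plus the scale the receiving worker needs
    to dequantize. (Illustrative sketch, not the paper's exact scheme.)
    """
    scale = np.max(np.abs(delta)) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor: any nonzero scale works
    q = np.clip(np.round(delta / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    # Reconstruction error is bounded by scale / 2 per element.
    return q.astype(np.float32) * scale

# Payload shrinks 4x: 1024 float32 values -> 1024 int8 values + one scale.
delta = np.random.randn(1024).astype(np.float32)
q, s = quantize_int8(delta)
recovered = dequantize_int8(q, s)
```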

📝 Abstract
Training of large language models (LLMs) is typically distributed across a large number of accelerators to reduce training time. Since internal states and parameter gradients need to be exchanged at each and every single gradient step, all devices need to be co-located using low-latency high-bandwidth communication links to support the required high volume of exchanged bits. Recently, distributed algorithms like DiLoCo have relaxed such co-location constraint: accelerators can be grouped into "workers", where synchronizations between workers only occur infrequently. This in turn means that workers can afford being connected by lower bandwidth communication links without affecting learning quality. However, in these methods, communication across workers still requires the same peak bandwidth as before, as the synchronizations require all parameters to be exchanged across all workers. In this paper, we improve DiLoCo in three ways. First, we synchronize only subsets of parameters in sequence, rather than all at once, which greatly reduces peak bandwidth. Second, we allow workers to continue training while synchronizing, which decreases wall clock time. Third, we quantize the data exchanged by workers, which further reduces bandwidth across workers. By properly combining these modifications, we show experimentally that we can distribute training of billion-scale parameters and reach similar quality as before, but reducing required bandwidth by two orders of magnitude.
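The first modification — synchronizing parameter subsets in sequence rather than all at once — can be sketched as a round-robin schedule over parameter fragments. The sketch below is a simplified stand-in under stated assumptions: plain averaging replaces DiLoCo's outer optimizer step, and a loop over worker copies replaces a real chunked AllReduce; `make_fragments` and `outer_step` are hypothetical names:

```python
import numpy as np

def make_fragments(num_params, num_fragments):
    """Partition parameter indices into contiguous fragments; each outer
    step exchanges only one fragment, so peak bandwidth drops by
    roughly a factor of num_fragments versus a full exchange."""
    return np.array_split(np.arange(num_params), num_fragments)

def outer_step(worker_params, fragment_idx, fragments):
    """Average one fragment across workers (stand-in for chunked AllReduce
    plus the outer optimizer update; real DiLoCo applies Nesterov momentum
    to the averaged delta instead of plain averaging)."""
    idx = fragments[fragment_idx]
    avg = np.mean([p[idx] for p in worker_params], axis=0)
    for p in worker_params:
        p[idx] = avg
    return worker_params

# Two simulated workers hold drifted parameter copies; one full pass of
# the streaming schedule brings every fragment back into agreement.
rng = np.random.default_rng(0)
workers = [rng.standard_normal(8) for _ in range(2)]
fragments = make_fragments(num_params=8, num_fragments=4)
for t in range(4):
    outer_step(workers, t % 4, fragments)
```

In the actual method the inner training steps continue while a fragment is in flight (the second modification), which this single-threaded sketch does not model.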
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Reduced Network Communication
Training Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient Distributed Training
Network Resource Optimization
Communication-Computation Overlap