🤖 AI Summary
DiLoCo reduces communication overhead in distributed training, but its outer optimization step blocks on synchronization between workers. This blocking is especially costly in cross-datacenter settings with limited bandwidth, where it can significantly slow down training. Method: the paper proposes *eager updates*, a mechanism that fully overlaps the outer optimization step with the inner optimization phase in DiLoCo by overlapping communication with computation, eliminating the wait at each outer synchronization point. Contribution/Results: the authors present this as enabling fully overlapped outer-step execution in DiLoCo. It provides performance competitive with standard DiLoCo while hiding outer-step communication latency, which is particularly beneficial for bandwidth-constrained training of very large models.
📝 Abstract
Distributed optimization methods such as DiLoCo have been shown to be effective in training very large models across multiple distributed workers, such as datacenters. These methods split updates into two parts: an inner optimization phase, where the workers independently execute multiple optimization steps on their own local data, and an outer optimization step, where the inner updates are synchronized. While such approaches require orders of magnitude less communication than standard data-parallel training, in settings where the workers are datacenters, even the limited communication requirements of these approaches can still cause significant slowdowns due to the blocking necessary at each outer optimization step. In this paper, we investigate techniques to mitigate this issue by overlapping communication with computation in a manner that allows the outer optimization step to fully overlap with the inner optimization phase. We show that a particular variant, dubbed eager updates, provides competitive performance with standard DiLoCo in settings with low bandwidth between workers.
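To make the inner/outer structure concrete, below is a minimal single-process toy simulation of the two-level scheme the abstract describes. It is a sketch under stated assumptions, not the paper's implementation: each worker's "local data" is stood in for by a worker-specific quadratic loss, the all-reduce is simulated by an in-process average, and the overlapped branch illustrates one plausible reading of eager updates, in which a worker applies its own fresh local outer gradient immediately and folds in the cross-worker average once that (otherwise blocking) communication has completed, one outer round later. All names (`local_phase`, `diloco`) and hyperparameters are illustrative.

```python
NUM_WORKERS = 4
INNER_STEPS = 10
OUTER_STEPS = 20
LR_INNER = 0.2
LR_OUTER = 0.5

def local_phase(theta0, worker_id):
    """Inner optimization phase: several local SGD steps on one worker's data.

    A worker-specific quadratic loss 0.5 * (theta - worker_id)**2 stands in
    for that worker's local dataset.
    """
    theta = theta0
    target = float(worker_id)
    for _ in range(INNER_STEPS):
        theta -= LR_INNER * (theta - target)
    # "Outer gradient": how far this worker moved during the inner phase.
    return theta0 - theta

def diloco(overlap):
    """Run outer rounds; overlap=False is standard blocking DiLoCo."""
    theta = 10.0
    pending = 0.0  # part of the averaged outer gradient still "in flight"
    for _ in range(OUTER_STEPS):
        deltas = [local_phase(theta, w) for w in range(NUM_WORKERS)]
        avg = sum(deltas) / NUM_WORKERS  # simulated all-reduce
        if overlap:
            # Eager flavor: apply this worker's own fresh contribution now,
            # plus last round's averaged remainder (whose communication has
            # since completed), so the outer step never blocks.
            theta -= LR_OUTER * (pending + deltas[0] / NUM_WORKERS)
            pending = avg - deltas[0] / NUM_WORKERS  # applied next round
        else:
            # Standard DiLoCo: block until the averaged outer gradient
            # is available, then apply it.
            theta -= LR_OUTER * avg
    return theta
```

In this toy setting both variants converge to the mean of the worker optima (1.5); the overlapped variant simply applies the averaged portion of each outer gradient with a one-round delay, which is the price of hiding the communication behind the next inner phase.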