DES-LOC: Desynced Low Communication Adaptive Optimizers for Training Foundation Models

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
In distributed training, DDP is bandwidth-limited, while low-communication methods such as Local SGD struggle to accommodate adaptive optimizers (e.g., Adam) because not only the model parameters but also auxiliary states such as momentum must be kept consistent; existing extensions either lack convergence guarantees or incur prohibitive communication overhead. This paper proposes the first decoupled synchronization framework with provable convergence: it assigns distinct synchronization periods to model parameters and momentum states, and couples a hierarchical synchronization strategy with an adaptive communication-scheduling algorithm. Evaluated on a 1.7B-parameter language model, the method reduces communication volume by 170× over DDP and by 2× over Local Adam without sacrificing convergence speed or final accuracy, while also providing fault tolerance and practical deployability.

📝 Abstract
Scaling foundation model training with Distributed Data Parallel (DDP) methods is bandwidth-limited. Existing infrequent communication methods like Local SGD were designed to synchronize only model parameters and cannot be trivially applied to adaptive optimizers due to additional optimizer states. Current approaches extending Local SGD either lack convergence guarantees or require synchronizing all optimizer states, tripling communication costs. We propose Desynced Low Communication Adaptive Optimizers (DES-LOC), a family of optimizers assigning independent synchronization periods to parameters and momenta, enabling lower communication costs while preserving convergence. Through extensive experiments on language models of up to 1.7B, we show that DES-LOC can communicate 170x less than DDP and 2x less than the previous state-of-the-art Local ADAM. Furthermore, unlike previous heuristic approaches, DES-LOC is suited for practical training scenarios prone to system failures. DES-LOC offers a scalable, bandwidth-efficient, and fault-tolerant solution for foundation model training.
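The decoupled-period idea can be mimicked in a toy single-process simulation: several workers run local Adam steps on simple per-worker quadratic losses, parameters are averaged frequently, and the two Adam momenta are averaged on a longer period. Everything below (the losses, periods, and hyperparameters) is an illustrative assumption for intuition, not the paper's actual DES-LOC algorithm:

```python
import numpy as np

def des_loc_sketch(n_workers=4, steps=200, h_param=8, h_mom=32,
                   lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, seed=0):
    """Toy sketch of desynced synchronization (illustrative, not the
    paper's exact algorithm). Each worker i runs local Adam on a
    quadratic loss f_i(x) = 0.5 * (x - c_i)^2; parameters are averaged
    every h_param steps, momenta only every h_mom steps."""
    rng = np.random.default_rng(seed)
    centers = rng.normal(0.0, 1.0, size=n_workers)  # per-worker optima c_i
    x = np.full(n_workers, 5.0)                     # one scalar "model" per worker
    m = np.zeros(n_workers)                         # Adam first moment
    v = np.zeros(n_workers)                         # Adam second moment
    comm = 0                                        # model-sized tensors synced
    for t in range(1, steps + 1):
        g = x - centers                             # gradient of each local loss
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
        if t % h_param == 0:                        # frequent parameter sync
            x[:] = x.mean()
            comm += 1
        if t % h_mom == 0:                          # infrequent momentum sync
            m[:] = m.mean()
            v[:] = v.mean()
            comm += 2                               # two momentum tensors
    return x, centers.mean(), comm
```

In this toy setting the workers still converge near the minimizer of the average loss (the mean of the `c_i`) while syncing the momenta four times less often than the parameters; a "Local Adam"-style baseline would instead ship all three tensors at every parameter sync.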
Problem

Research questions and friction points this paper is trying to address.

Reducing communication costs in distributed foundation model training
Enabling adaptive optimizers with independent synchronization periods
Providing fault-tolerant training for large-scale language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Independent sync periods for parameters and momenta
Reduces communication volume by up to 170x vs. DDP and 2x vs. Local ADAM
Preserves convergence and tolerates system failures
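The headline ratios follow from back-of-the-envelope accounting, sketched below under assumed synchronization periods (the paper's actual settings may differ): DDP communicates one model-sized gradient per step, Local Adam ships parameters plus both Adam momenta every h_param steps, and a desynced schedule ships the momenta only every h_mom steps:

```python
def relative_comm(steps, h_param, h_mom):
    """Count model-sized tensors communicated over `steps` optimizer steps.
    DDP: gradients every step. Local Adam: params + 2 momenta every
    h_param steps. Desynced: params every h_param, momenta every h_mom.
    The periods passed below are illustrative assumptions."""
    ddp = steps
    local_adam = 3 * (steps // h_param)
    desynced = steps // h_param + 2 * (steps // h_mom)
    return ddp, local_adam, desynced

ddp, la, dl = relative_comm(steps=1024 * 256, h_param=256, h_mom=1024)
# ratios: ddp / dl ≈ 170.7, la / dl = 2.0
```

With h_mom = 4 * h_param, the desynced schedule communicates exactly 2x less than Local Adam and roughly 170x less than DDP, matching the reported savings under these illustrative periods.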