🤖 AI Summary
In distributed training, attention-based models are frequently impaired by heavy-tailed stochastic gradient noise, whose potentially unbounded variance destabilizes convergence. To address this, the authors propose TailOPT, an optimization framework with convergence guarantees for distributed optimization with local updates (Local SGD) under heavy-tailed stochastic gradients. Key contributions are: (1) Bi²Clip, a coordinate-wise clipping scheme applied at both the inner and outer optimizers, which attains Adam-like performance without maintaining or transmitting additional gradient statistics; and (2) an integrated design combining gradient compression with a nested optimization structure to improve robustness and communication efficiency simultaneously. Experiments on multiple language-modeling benchmarks show that TailOPT significantly outperforms state-of-the-art methods in both accuracy and convergence stability, particularly under unbounded gradient variance, while reducing communication and memory overhead.
📝 Abstract
Distributed optimization has become the default training paradigm in modern machine learning due to the growing scale of models and datasets. To mitigate communication overhead, local updates are often applied before global aggregation, yielding a nested optimization scheme with inner and outer steps. However, heavy-tailed stochastic gradient noise remains a significant challenge, particularly in attention-based models, and can hinder effective training. In this work, we propose TailOPT, an efficient framework designed to address heavy-tailed noise by leveraging adaptive optimization or clipping techniques. We establish convergence guarantees for the TailOPT framework under heavy-tailed noise with potentially unbounded gradient variance and local updates. Among its variants, we highlight a memory- and communication-efficient instantiation, which we call $Bi^2Clip$, that performs coordinate-wise clipping at both the inner and outer optimizers, achieving adaptive-like performance (e.g., comparable to Adam) without the cost of maintaining or transmitting additional gradient statistics. Empirically, TailOPT, including $Bi^2Clip$, demonstrates superior performance on several language tasks and models, outperforming state-of-the-art methods.
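To make the nested structure concrete, here is a minimal sketch of one communication round with coordinate-wise clipping at both levels, in the spirit of $Bi^2Clip$. All names, learning rates, and thresholds (`eta_in`, `eta_out`, `tau_in`, `tau_out`) are illustrative assumptions, not the paper's actual hyperparameters or implementation: each client runs a few locally clipped SGD steps, and the server clips the averaged client deltas (pseudo-gradients) before applying the outer update.

```python
# Illustrative sketch of doubly (inner + outer) coordinate-wise clipping
# with local updates; names and hyperparameters are assumptions, not the
# paper's reference implementation.
import numpy as np

def coord_clip(g, tau):
    # Coordinate-wise clipping: each component is capped at magnitude tau,
    # which bounds the update even when noise variance is unbounded.
    return np.clip(g, -tau, tau)

def bi2clip_round(w, client_grad_fn, n_clients=4, local_steps=3,
                  eta_in=0.1, eta_out=1.0, tau_in=1.0, tau_out=0.5):
    """One communication round: clipped local SGD on each client, then
    clipped aggregation of client deltas (pseudo-gradients) at the server."""
    deltas = []
    for c in range(n_clients):
        w_c = w.copy()
        for _ in range(local_steps):
            g = client_grad_fn(c, w_c)
            w_c -= eta_in * coord_clip(g, tau_in)   # inner clipping
        deltas.append(w - w_c)                      # client pseudo-gradient
    avg_delta = np.mean(deltas, axis=0)
    return w - eta_out * coord_clip(avg_delta, tau_out)  # outer clipping
```

Because only the current iterate and clipped deltas are exchanged, no second-moment statistics (as in Adam) need to be stored or transmitted, which is the memory and communication saving the abstract refers to.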