🤖 AI Summary
Asynchronous stochastic gradient descent (ASGD) with momentum often converges slowly, or even diverges, because stale gradients corrupt the momentum estimate. This paper proposes Ordered Momentum (OrMo), a momentum scheme that incorporates delayed gradients into the momentum vector in order of their iteration indices, keeping the momentum estimate directionally consistent. OrMo also admits a delay-adaptive learning rate that adjusts step sizes without requiring prior knowledge of delay statistics. Notably, the authors establish convergence guarantees for ASGD with momentum on non-convex problems whose rate does not depend on the worst-case (maximum) delay, the first such analysis. Experiments on image classification and language modeling show that OrMo outperforms standard ASGD and existing asynchronous momentum methods in both convergence speed and training stability.
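To make the "reordering by iteration index" idea concrete, here is a minimal sketch of an ordered-momentum-style server update. It assumes a simple geometric discounting rule: a gradient computed at iteration `k` but arriving at iteration `t` is folded into the momentum with weight `beta ** (t - k)`, i.e. the weight it would carry had it arrived on time. The function name and this exact discounting are illustrative assumptions, not the paper's precise construction.

```python
import numpy as np

def ormo_like_update(m, w, grads_with_index, t, beta=0.9, lr=0.1):
    """One server step of an ordered-momentum-style update (illustrative sketch).

    m: momentum vector; w: parameter vector.
    grads_with_index: list of (k, g) pairs, where gradient g was computed
    from the iteration-k parameters, so its delay at step t is t - k.
    """
    # Standard momentum decay for the current step.
    m = beta * m
    # Fold each delayed gradient in at the weight it would have received
    # had it arrived on time: beta ** (t - k).  Processing in index order
    # keeps the momentum's effective gradient ordering consistent.
    # (Assumed discounting rule, not necessarily the paper's exact scheme.)
    for k, g in sorted(grads_with_index, key=lambda kg: kg[0]):
        m = m + (beta ** (t - k)) * g
    w = w - lr * m
    return m, w
```

A naive asynchronous momentum update would instead add every arriving gradient at full weight, so a very stale gradient could dominate the momentum direction; the index-based weighting above is one way to see why ordering helps.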
📝 Abstract
Distributed learning is essential for training large-scale deep models. Asynchronous SGD (ASGD) and its variants are commonly used distributed learning methods, particularly in scenarios where the computing capabilities of workers in the cluster are heterogeneous. Momentum has been acknowledged for its benefits in both optimization and generalization in deep model training. However, existing works have found that naively incorporating momentum into ASGD can impede convergence. In this paper, we propose a novel method called ordered momentum (OrMo) for ASGD. In OrMo, momentum is incorporated into ASGD by ordering the gradients based on their iteration indices. We theoretically prove the convergence of OrMo with both constant and delay-adaptive learning rates for non-convex problems. To the best of our knowledge, this is the first work to establish a convergence analysis of ASGD with momentum that does not depend on the maximum delay. Empirical results demonstrate that OrMo achieves better convergence performance than ASGD and other asynchronous methods with momentum.