🤖 AI Summary
To address the computational overhead and extra hyperparameters of Shampoo, and the lack of higher-order preconditioning in Adam, during large-model pretraining, this paper proposes SOAP: an optimizer that runs Adam-style second-moment updates in the eigenbasis of Shampoo's preconditioner, thereby unifying higher-order preconditioning with adaptive learning rates. Theoretically, the paper shows that Shampoo (implemented with the 1/2 power) is equivalent to Adafactor run in the eigenbasis of Shampoo's preconditioner. Methodologically, SOAP introduces only one hyperparameter beyond Adam's (the preconditioning frequency) and keeps the second-moment running average up to date in the current, slowly changing eigenbasis, which mitigates the degradation that otherwise results from computing eigendecompositions infrequently. Experiments on large-batch language model pretraining show that SOAP reduces iteration count by over 40% and wall-clock time by over 35% compared to AdamW; relative to Shampoo, it improves both metrics by approximately 20%.
📝 Abstract
There is growing evidence of the effectiveness of Shampoo, a higher-order preconditioning method, over Adam in deep learning optimization tasks. However, Shampoo's drawbacks include additional hyperparameters and computational overhead when compared to Adam, which only updates running averages of first- and second-moment quantities. This work establishes a formal connection between Shampoo (implemented with the 1/2 power) and Adafactor -- a memory-efficient approximation of Adam -- showing that Shampoo is equivalent to running Adafactor in the eigenbasis of Shampoo's preconditioner. This insight leads to the design of a simpler and computationally efficient algorithm: $\textbf{S}$hampo$\textbf{O}$ with $\textbf{A}$dam in the $\textbf{P}$reconditioner's eigenbasis (SOAP). With regard to improving Shampoo's computational efficiency, the most straightforward approach would be to simply compute Shampoo's eigendecomposition less frequently. Unfortunately, as our empirical results show, this leads to performance degradation that worsens as the interval between eigendecompositions grows. SOAP mitigates this degradation by continually updating the running average of the second moment, just as Adam does, but in the current (slowly changing) coordinate basis. Furthermore, since SOAP is equivalent to running Adam in a rotated space, it introduces only one additional hyperparameter (the preconditioning frequency) compared to Adam. We empirically evaluate SOAP on language model pre-training with 360M- and 660M-parameter models. In the large batch regime, SOAP reduces the number of iterations by over 40% and wall clock time by over 35% compared to AdamW, with approximately 20% improvements in both metrics compared to Shampoo. An implementation of SOAP is available at https://github.com/nikhilvyas/SOAP.
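To make the update concrete, below is a minimal NumPy sketch of SOAP for a single 2D weight matrix: Shampoo's Kronecker factors are accumulated every step, their eigenbasis is refreshed only every `precond_freq` steps, and a standard Adam update runs on the rotated gradient. The function name `soap_step`, the state dictionary, and the default hyperparameters are illustrative assumptions; bias correction and the re-rotation of Adam's moments when the basis changes are omitted. This is a sketch of the idea under those assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def soap_step(W, grad, state, lr=3e-4, beta1=0.95, beta2=0.95,
              eps=1e-8, precond_freq=10):
    """One SOAP step for a 2D parameter: Adam run in the eigenbasis of
    Shampoo's preconditioner (hypothetical minimal sketch)."""
    m, n = W.shape
    if not state:  # lazy initialization on the first call
        state.update(L=np.zeros((m, m)), R=np.zeros((n, n)),
                     QL=np.eye(m), QR=np.eye(n),
                     exp_avg=np.zeros((m, n)),
                     exp_avg_sq=np.zeros((m, n)), step=0)
    state["step"] += 1

    # Running averages of Shampoo's two Kronecker factors.
    state["L"] = beta2 * state["L"] + (1 - beta2) * grad @ grad.T
    state["R"] = beta2 * state["R"] + (1 - beta2) * grad.T @ grad

    # Refresh the eigenbasis only every `precond_freq` steps: this is
    # the single hyperparameter SOAP adds on top of Adam.
    if (state["step"] - 1) % precond_freq == 0:
        _, state["QL"] = np.linalg.eigh(state["L"])
        _, state["QR"] = np.linalg.eigh(state["R"])

    # Rotate the gradient into the current (slowly changing) eigenbasis.
    g_rot = state["QL"].T @ grad @ state["QR"]

    # Standard Adam moments, maintained in the rotated coordinates; the
    # second moment keeps updating between basis refreshes, which is what
    # mitigates the degradation from infrequent eigendecompositions.
    state["exp_avg"] = beta1 * state["exp_avg"] + (1 - beta1) * g_rot
    state["exp_avg_sq"] = beta2 * state["exp_avg_sq"] + (1 - beta2) * g_rot**2
    update = state["exp_avg"] / (np.sqrt(state["exp_avg_sq"]) + eps)

    # Map the preconditioned update back to the original basis and apply.
    return W - lr * state["QL"] @ update @ state["QR"].T
```

A full optimizer would apply a step like this to each matrix-shaped parameter and fall back to plain Adam for 1D parameters such as biases and norms; the official implementation linked above handles those details.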