🤖 AI Summary
Muon optimizers rely on approximate orthogonalization (e.g., Newton–Schulz iteration) for efficient training, but their theoretical analysis has long assumed infeasible exact SVD updates, creating a gap between theory and practice.
Method: We establish the first convergence theory for Muon under *inexact* orthogonalization, formulated within the linear minimization oracle (LMO) framework. We introduce an additive error model to quantify the interplay among orthogonalization approximation error, learning rate, and momentum parameters.
Contribution/Results: We derive an explicit convergence bound that degrades gracefully with the LMO error, proving that orthogonalization accuracy is not an isolated implementation detail but a quantity that must be tuned jointly with the other hyperparameters. Experiments on NanoGPT confirm our theory: increasing the orthogonalization error induces significant shifts in the optimal learning rate, validating the predicted coupling between approximation fidelity and hyperparameter selection.
📝 Abstract
The Muon optimizer has rapidly emerged as a powerful, geometry-aware alternative to AdamW, demonstrating strong performance in large-scale training of neural networks. However, a critical theory-practice disconnect exists: Muon's efficiency relies on fast, approximate orthogonalization, yet all prior theoretical work analyzes an idealized, computationally intractable version assuming exact SVD-based updates. This work moves beyond the ideal by providing the first analysis of the inexact orthogonalized update at Muon's core. We develop our analysis within the general framework of Linear Minimization Oracle (LMO)-based optimization, introducing a realistic additive error model that captures the inexactness of practical approximation schemes. Our analysis yields explicit bounds that quantify performance degradation as a function of the LMO inexactness. We reveal a fundamental coupling between this inexactness and the optimal step size and momentum: lower oracle precision requires a smaller step size but a larger momentum parameter. These findings elevate the approximation procedure (e.g., the number of Newton–Schulz steps) from an implementation detail to a critical parameter that must be co-tuned with the learning schedule. NanoGPT experiments directly confirm the predicted coupling, with optimal learning rates clearly shifting as approximation precision changes.
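The approximate orthogonalization at the heart of this analysis can be illustrated with a minimal Newton–Schulz sketch. The cubic iteration below (Muon in practice uses a tuned quintic variant; the function name and step count here are illustrative, not the paper's implementation) drives a matrix toward the orthogonal polar factor U Vᵀ that an exact SVD-based update would compute, with the number of steps controlling the approximation error that the theory quantifies:

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=5):
    """Approximate the orthogonal polar factor U @ Vt of G = U @ diag(s) @ Vt
    via the cubic Newton-Schulz iteration X <- 1.5 X - 0.5 X X^T X.
    More steps -> smaller orthogonalization error (the LMO error in the paper)."""
    # Scale so all singular values lie in (0, 1], where the iteration converges.
    X = G / (np.linalg.norm(G) + 1e-7)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

# Build a test matrix with known, well-separated singular values.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
G = Q1 @ np.diag([3.0, 2.0, 1.0, 0.5]) @ Q2.T

X = newton_schulz_orthogonalize(G, steps=15)
U, s, Vt = np.linalg.svd(G)
# With enough steps, X matches the exact SVD-based orthogonalization U @ Vt.
print(np.max(np.abs(X - U @ Vt)))
```

Fewer steps leave a larger residual ‖XᵀX − I‖, which is exactly the additive LMO error the bound is stated in terms of; co-tuning this step count with the learning rate is the coupling the paper predicts and observes.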