🤖 AI Summary
Conventional optimizers treat neural network weight matrices as flattened, unstructured vectors, ignoring structure that arises during training, most notably the low-rank and approximately block-diagonal form of the Hessian, which can slow convergence. Although the recently proposed Muon optimizer shows strong empirical performance, its theoretical foundations have remained elusive. Method: We develop a rigorous convergence analysis of Muon that combines matrix calculus, nonconvex optimization theory, and structured Hessian analysis. Contribution/Results: We prove that Muon's acceleration stems from its exploitation of the Hessian's low-rank and near block-diagonal structure, and we characterize conditions, common in practical neural network training, under which Muon outperforms Gradient Descent. Experiments validate the theoretical predictions, with the derived bounds closely tracking empirical behavior. This work narrows a theoretical gap in structure-aware matrix optimization and points toward a geometrically informed approach to designing adaptive optimizers.
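For reference, the summary above does not spell out the update rule being analyzed. The display below gives the commonly cited form of the Muon update next to plain gradient descent for contrast; the notation ($\eta$ for the step size, $\beta$ for the momentum coefficient, $M_t$ for the momentum buffer) is illustrative and not taken verbatim from the paper.

```latex
\[
\begin{aligned}
\text{GD (flattened / entrywise):}\quad & W_{t+1} = W_t - \eta \,\nabla f(W_t),\\[4pt]
\text{Muon (orthogonalized momentum):}\quad & M_t = \beta M_{t-1} + \nabla f(W_t), \qquad
W_{t+1} = W_t - \eta \, U_t V_t^\top,\\
& \text{where } M_t = U_t \Sigma_t V_t^\top \text{ is the reduced SVD of the momentum buffer.}
\end{aligned}
\]
```

In practice the orthogonal factor $U_t V_t^\top$ is approximated with a few Newton-Schulz iterations rather than an explicit SVD; a code sketch of this appears after the abstract.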
📝 Abstract
The majority of parameters in neural networks are naturally represented as matrices. However, most commonly used optimizers treat these matrix parameters as flattened vectors during optimization, potentially overlooking their inherent structural properties. Recently, an optimizer called Muon has been proposed, specifically designed to optimize matrix-structured parameters. Extensive empirical evidence shows that Muon can significantly outperform traditional optimizers when training neural networks. Nonetheless, theoretical understanding of Muon's convergence behavior, and of the reasons behind its superior performance, remains limited. In this work, we present a comprehensive convergence rate analysis of Muon and compare it with Gradient Descent (GD). We further characterize the conditions under which Muon can outperform GD. Our theoretical results reveal that Muon can benefit from the low-rank and approximately blockwise diagonal structure of Hessian matrices -- phenomena widely observed in practical neural network training. Our experimental results corroborate the theoretical findings.
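To make the "matrix-aware" update concrete, here is a minimal sketch of a Muon-style step in PyTorch. It is an illustration under common assumptions (cubic Newton-Schulz orthogonalization, heavy-ball momentum, and hypothetical helper names such as `muon_step`), not the authors' reference implementation; practical versions typically use a tuned higher-order Newton-Schulz polynomial and per-layer scaling.

```python
import torch


def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5, eps: float = 1e-7) -> torch.Tensor:
    """Approximate the orthogonal polar factor U V^T of a matrix G.

    Dividing by the Frobenius norm puts every singular value in (0, 1],
    which is enough for the cubic Newton-Schulz iteration
        X <- 1.5 X - 0.5 X X^T X
    to push all singular values toward 1.
    """
    X = G / (G.norm() + eps)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X


@torch.no_grad()
def muon_step(W: torch.Tensor, grad: torch.Tensor, momentum: torch.Tensor,
              lr: float = 0.02, beta: float = 0.95) -> None:
    """One Muon-style update on a 2-D weight matrix W, in place.

    The gradient is folded into a momentum buffer, the buffer is
    orthogonalized so the update has (approximately) equal singular values,
    and the result is applied directly as a matrix; the parameter is never
    flattened into a vector.
    """
    momentum.mul_(beta).add_(grad)                  # heavy-ball momentum
    update = newton_schulz_orthogonalize(momentum)  # ~ U V^T of the momentum buffer
    W.add_(update, alpha=-lr)


# Usage sketch: a toy quadratic loss on a single weight matrix.
W = torch.randn(64, 32, requires_grad=True)
buf = torch.zeros_like(W)
target = torch.randn(64, 32)
for _ in range(10):
    loss = 0.5 * (W - target).pow(2).sum()
    grad, = torch.autograd.grad(loss, W)
    muon_step(W, grad, buf)
```

The orthogonalization step is what makes the update matrix-aware; the paper's analysis attributes Muon's advantage over GD to how this interacts with the low-rank, approximately blockwise diagonal Hessians observed in practice. The step size, momentum coefficient, and iteration count above are illustrative defaults, not values from the paper.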