🤖 AI Summary
This work addresses the mismatch between Euclidean gradient descent and the geometry of neural network layers by proposing a non-Euclidean gradient descent framework. The framework unifies per-layer norm selection and cross-layer aggregation mechanisms, formally defining an optimizer family that contains the recently proposed Muon and its variants; it introduces learning-rate normalization and model-based momentum (Momo) to enhance robustness; and, for the first time, it incorporates Adam into this framework, yielding the improved variant MuonMax. Experiments demonstrate that MuonMax+Momo achieves strong generalization on unseen tasks, significantly reduces hyperparameter sensitivity—cutting tuning costs by over 40% on average—and consistently outperforms Adam and SGD across multi-task benchmarks. The core contribution is a systematic, theoretically grounded non-Euclidean optimization framework for gradient descent, accompanied by a novel optimizer family that bridges rigorous mathematical foundations with practical engineering efficacy.
📝 Abstract
To define a steepest descent method over a neural network, we need to choose a norm for each layer, a way to aggregate these norms across layers, and whether to use normalization. We systematically explore alternatives for aggregating norms across layers, both formalizing existing combinations of Adam and the recently proposed Muon as types of non-Euclidean gradient descent, and deriving new variants of the Muon optimizer. Through a comprehensive experimental evaluation of the optimizers within our framework, we find that Muon is sensitive to the choice of learning rate, whereas a new variant we call MuonMax is significantly more robust. We then show how to combine any non-Euclidean gradient method with model-based momentum (known as Momo). The new Momo variants of Muon are significantly more robust to hyperparameter tuning and often achieve a better validation score. Thus for new tasks, where the optimal hyperparameters are not known, we advocate using Momo in combination with MuonMax to save on costly hyperparameter tuning.
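To make the framing concrete, the sketch below illustrates non-Euclidean steepest descent at the layer level: for a chosen norm, the update direction is the unit-norm matrix that maximizes alignment with the gradient. Under the spectral norm this gives the orthogonalized direction \(UV^\top\) used by Muon-style methods; under the max (infinity) norm it gives the elementwise sign, which is the geometry underlying sign-based methods like Adam without its moving averages. This is a hedged illustration of the general idea, not the paper's implementation, and the function names and the per-layer step-size choice are our own assumptions.

```python
import numpy as np

def steepest_descent_direction(grad, norm="spectral"):
    """Steepest-descent direction for one layer's gradient under a chosen norm.

    Illustrative sketch (not the paper's code):
    - "spectral": U @ Vt from the SVD of grad, the steepest direction
      under the spectral norm (Muon-style orthogonalized update).
    - "inf": elementwise sign of grad, the steepest direction under
      the max norm (sign-SGD / Adam-like geometry).
    """
    if norm == "spectral":
        U, _, Vt = np.linalg.svd(grad, full_matrices=False)
        return U @ Vt
    elif norm == "inf":
        return np.sign(grad)
    raise ValueError(f"unknown norm: {norm}")

def non_euclidean_step(params, grads, lr=0.01, norm="spectral"):
    # One non-Euclidean gradient step, applied layer by layer.
    # Aggregating layer norms with a max means every layer takes a
    # full step of size lr in its own steepest direction
    # (one illustrative aggregation choice among those the paper explores).
    return [W - lr * steepest_descent_direction(g, norm)
            for W, g in zip(params, grads)]
```

In practice Muon approximates the orthogonalization with a Newton–Schulz iteration rather than a full SVD; the SVD here is used only for clarity.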