🤖 AI Summary
To address the high memory overhead and poor sample complexity (typically O(ε⁻⁴)) of Muon-type optimizers in large-model training, this paper proposes LiMuon, a lightweight and efficient optimizer. Methodologically, LiMuon is the first to integrate momentum-based variance reduction with randomized SVD, simultaneously reducing memory cost and improving the sample complexity to O(ε⁻³). Theoretically, it establishes the first convergence guarantee for such methods under generalized smoothness conditions, relaxing the restrictive Lipschitz-gradient assumption required by prior work. Empirical evaluation on DistilGPT2 and ViT shows that LiMuon clearly outperforms Muon and its variants: it reduces memory consumption by up to 37% and accelerates convergence by 2.1×, while maintaining strong scalability and training efficiency.
📝 Abstract
Large models have recently been widely applied in artificial intelligence, so the efficient training of large models has received widespread attention. More recently, the Muon optimizer was specifically designed for the matrix-structured parameters of large models. Although some works have begun to study the Muon optimizer, the existing Muon and its variants still suffer from high sample complexity or high memory cost when training large models. To fill this gap, we propose a light and fast Muon (LiMuon) optimizer for training large models, which builds on a momentum-based variance-reduction technique and randomized Singular Value Decomposition (SVD). Our LiMuon optimizer has a lower memory cost than the current Muon and its variants. Moreover, we prove that LiMuon achieves a lower sample complexity of $O(ε^{-3})$ for finding an $ε$-stationary solution of non-convex stochastic optimization under the standard smoothness condition. Meanwhile, the existing convergence analysis of the Muon optimizer mainly relies on the strict Lipschitz-smoothness assumption, while some artificial intelligence tasks, such as training large language models (LLMs), do not satisfy this condition. We also prove that our LiMuon optimizer retains a sample complexity of $O(ε^{-3})$ under the generalized smoothness condition. Numerical experimental results on training DistilGPT2 and ViT models verify the efficiency of our LiMuon optimizer.