FedMuon: Federated Learning with Bias-corrected LMO-based Optimization

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, directly applying the Muon optimizer—based on a linear minimization oracle (LMO)—within FedAvg fails to converge to stationary points due to the inherent bias of the LMO. This work introduces Muon to federated learning for the first time, systematically analyzing how LMO bias undermines global convergence. We propose FedMuon, a novel framework that (i) corrects LMO-induced bias via a dedicated debiasing technique and (ii) efficiently approximates the LMO using Newton–Schulz iterations—guaranteeing convergence for any number of iterations, with higher approximation accuracy accelerating overall convergence. Experiments across heterogeneous data distributions, communication-constrained settings, and varying model scales demonstrate that FedMuon consistently outperforms state-of-the-art methods, achieving both faster convergence and superior training efficiency.

📝 Abstract
Recently, a new optimization method based on the linear minimization oracle (LMO), called Muon, has been attracting increasing attention since it can train neural networks faster than existing adaptive optimization methods such as Adam. In this paper, we study how Muon can be utilized in federated learning. We first show that straightforwardly using Muon as the local optimizer of FedAvg does not converge to a stationary point since the LMO is a biased operator. We then propose FedMuon, which mitigates this issue. We also analyze how solving the LMO approximately affects the convergence rate and find that, surprisingly, FedMuon can converge for any number of Newton-Schulz iterations, while it converges faster as the LMO is solved more accurately. Through experiments, we demonstrate that FedMuon outperforms state-of-the-art federated learning methods.
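As a concrete illustration of the approximate LMO discussed in the abstract, the orthogonalizing step that Muon-style optimizers compute can be sketched with the classic cubic Newton-Schulz iteration, which drives a matrix toward its orthogonal polar factor. This is a minimal sketch under stated assumptions: the paper's exact iteration coefficients and step count are not given here, and the function name `newton_schulz_lmo` is illustrative.

```python
import numpy as np

def newton_schulz_lmo(G, steps=10):
    """Approximate the LMO over the spectral-norm ball, i.e. the orthogonal
    polar factor U V^T of G, via classic cubic Newton-Schulz iterations.

    Illustrative sketch only; the paper's exact iteration may differ.
    """
    # Scale so all singular values lie in (0, 1), which guarantees convergence
    # of the iteration X <- 1.5 X - 0.5 X X^T X toward the polar factor.
    X = G / (np.linalg.norm(G) + 1e-12)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X
```

Running more iterations yields a more accurate polar factor; the abstract's point is that convergence holds for *any* number of iterations, with accuracy trading off against per-step cost.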
Problem

Research questions and friction points this paper is trying to address.

Correcting biased LMO optimization in federated learning settings
Enabling Muon optimizer convergence for distributed neural network training
Improving federated learning efficiency with approximate LMO solutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated learning with bias-corrected LMO optimization
Mitigating bias in linear minimization oracle operator
Convergence with approximate LMO via Newton-Schulz iterations
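To make the setting above concrete, the following sketches the *naive* combination the paper argues against: each client runs local Muon steps (momentum followed by a Newton-Schulz-orthogonalized update) and the server averages the resulting weights, as in plain FedAvg. This is a toy illustration on a quadratic loss, not the paper's FedMuon algorithm; the debiasing technique itself is not reproduced, and all function names and hyperparameters here are hypothetical.

```python
import numpy as np

def orthogonalize(M, ns_steps=5):
    # Approximate polar factor of M via Newton-Schulz (the Muon LMO step).
    X = M / (np.linalg.norm(M) + 1e-12)
    for _ in range(ns_steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

def local_muon_steps(W, A_i, K=5, lr=0.1, beta=0.9):
    # Client i runs K Muon steps on a toy quadratic loss ||W - A_i||_F^2.
    M = np.zeros_like(W)
    for _ in range(K):
        G = 2.0 * (W - A_i)              # local gradient
        M = beta * M + G                 # heavy-ball momentum
        W = W - lr * orthogonalize(M)    # LMO-based (orthogonalized) update
    return W

def fedavg_round(W, clients, **kw):
    # Plain FedAvg: server averages the clients' locally updated weights.
    return np.mean([local_muon_steps(W.copy(), A_i, **kw) for A_i in clients],
                   axis=0)
```

Averaging these orthogonalized local directions is where the paper locates the bias: the LMO is a nonlinear, biased operator, so the averaged update need not point toward a stationary point of the global objective, which is what FedMuon's correction addresses.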