🤖 AI Summary
This work addresses decentralized matrix optimization over graph topologies. We propose DeMuon, the first decentralized extension of the centralized Muon optimizer. DeMuon performs efficient matrix orthogonalization via Newton–Schulz iterations and employs gradient tracking to handle objective heterogeneity across agents under heavy-tailed gradient noise. Theoretically, DeMuon is the first decentralized algorithm for this problem with provable convergence guarantees, achieving an iteration complexity that matches the best-known centralized rates in its dependence on the target accuracy. Empirically, DeMuon significantly outperforms existing decentralized optimization methods on decentralized Transformer pretraining across graphs with diverse connectivity structures.
📝 Abstract
In this paper, we propose DeMuon, a method for decentralized matrix optimization over a given communication topology. DeMuon incorporates matrix orthogonalization via Newton–Schulz iterations, a technique inherited from its centralized predecessor Muon, and employs gradient tracking to mitigate heterogeneity among the local objective functions. Under heavy-tailed noise conditions and additional mild assumptions, we establish the iteration complexity of DeMuon for reaching an approximate stochastic stationary point. This complexity result matches the best-known complexity bounds of centralized algorithms in terms of dependence on the target tolerance. To the best of our knowledge, DeMuon is the first direct extension of Muon to decentralized optimization over graphs with provable complexity guarantees. We conduct preliminary numerical experiments on decentralized Transformer pretraining over graphs with varying degrees of connectivity. Our numerical results demonstrate a clear margin of improvement of DeMuon over other popular decentralized algorithms across different network topologies.
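To illustrate the orthogonalization step mentioned above, here is a minimal sketch of a Newton–Schulz iteration that drives a matrix's singular values toward 1, yielding an approximation of the orthogonal factor of its SVD. The classical cubic coefficients (1.5, −0.5) and the step count are illustrative assumptions; Muon itself uses a tuned higher-order polynomial variant, and this is not the paper's exact implementation.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=30):
    """Approximately compute U @ V^T, where G = U @ S @ V^T is the SVD.

    Uses the cubic Newton-Schulz map X <- 1.5 X - 0.5 X X^T X. After
    Frobenius normalization, every singular value of X lies in (0, 1],
    and the map pushes each of them toward 1 without ever forming the
    SVD explicitly (only matrix multiplications are needed).
    """
    X = G / np.linalg.norm(G)   # Frobenius norm; all singular values <= 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:              # iterate in the wide orientation
        X = X.T
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X.T if transposed else X
```

For a square, well-conditioned input the result `X` satisfies `X @ X.T ≈ I` and agrees with the orthogonal polar factor of `G`; matrix-multiplication-only updates like this are what make the orthogonalization cheap enough to run inside each optimizer step.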