Sign-Based Optimizers Are Effective Under Heavy-Tailed Noise

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing theory struggles to explain why sign-based optimizers (SignSGD, Lion, Muon) empirically outperform adaptive gradient methods in large language model training, especially under the heavy-tailed gradient noise for which rigorous analysis has been lacking. This work introduces a generalized heavy-tailed noise model, paired with a class of generalized smooth functions, that captures LLM training more realistically than standard finite-variance assumptions. Within this framework, the authors establish the first convergence theory for matrix sign optimizers such as Muon under heavy-tailed noise and derive sharp convergence rates. The analysis shows that sign-based optimizers enjoy stronger convergence guarantees in these noisy regimes, and extensive LLM pretraining experiments confirm both the effectiveness of the methods and the practical relevance of the proposed noise model.
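
For context, a sketch of the standard conditions this framework generalizes (the paper's exact generalized assumptions are not reproduced here; the forms below are the common baseline versions from the heavy-tailed and generalized-smoothness literature):

```latex
% Bounded p-th moment noise, p in (1, 2]: for a stochastic gradient g(x),
%   E[ ||g(x) - grad f(x)||^p ] <= sigma^p,
% which permits infinite variance whenever p < 2.
\mathbb{E}\big[\| g(x) - \nabla f(x) \|^{p}\big] \le \sigma^{p}, \qquad p \in (1, 2].

% (L_0, L_1)-generalized smoothness: curvature may grow with the gradient norm,
% relaxing the classical uniform Lipschitz-gradient assumption.
\|\nabla^{2} f(x)\| \le L_0 + L_1 \|\nabla f(x)\|.
```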

📝 Abstract
While adaptive gradient methods are the workhorse of modern machine learning, sign-based optimization algorithms such as Lion and Muon have recently demonstrated superior empirical performance over AdamW in training large language models (LLMs). However, a theoretical understanding of why sign-based updates outperform variance-adapted methods remains elusive. In this paper, we aim to bridge the gap between theory and practice through the lens of heavy-tailed gradient noise, a phenomenon frequently observed in language modeling tasks. Theoretically, we introduce a novel generalized heavy-tailed noise condition that captures the behavior of LLMs more accurately than standard finite-variance assumptions. Under this noise model, we establish sharp convergence rates for SignSGD and Lion on generalized smooth function classes, matching or surpassing the previous best-known bounds. Furthermore, we extend our analysis to Muon and Muonlight, providing, to our knowledge, the first rigorous analysis of matrix optimization under heavy-tailed stochasticity. These results offer a strong theoretical justification for the empirical superiority of sign-based optimizers, showing that they are naturally suited to the noisy, heavy-tailed gradients encountered in practice. Empirically, LLM pretraining experiments validate our theoretical insights and confirm that the proposed noise models align well with practice.
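
As a rough sketch (not code from the paper), the snippet below illustrates the two flavors of sign update the abstract refers to: the element-wise sign of SignSGD/Lion-style methods and the matrix sign (orthogonalized update) of Muon. The Newton-Schulz routine is a simplified cubic variant of the tuned iteration Muon uses in practice; function names, step sizes, and the momentum scheme are illustrative assumptions.

```python
import numpy as np

def signsgd_step(param, grad, lr=1e-3):
    # Element-wise sign update: every coordinate moves by exactly lr,
    # so a single heavy-tailed gradient outlier cannot blow up the step.
    return param - lr * np.sign(grad)

def matrix_sign(G, steps=10):
    # Approximate the orthogonal polar factor U V^T of G with a cubic
    # Newton-Schulz iteration (simplified stand-in for Muon's tuned
    # quintic); converges when singular values lie in (0, sqrt(3)).
    X = G / (np.linalg.norm(G) + 1e-8)  # Frobenius norm keeps sigma_max <= 1
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

def muon_like_step(W, grad, buf, lr=0.02, beta=0.95):
    # Muon-flavored update: heavy-ball momentum, then matrix sign.
    buf = beta * buf + grad
    return W - lr * matrix_sign(buf), buf

# Toy usage with heavy-tailed (infinite-variance) Student-t noise.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
buf = np.zeros_like(W)
g = rng.standard_t(df=1.5, size=W.shape)
W, buf = muon_like_step(W, g, buf)
```

In both cases the update magnitude is bounded independently of the gradient's tail, which is the intuition behind the robustness the paper formalizes.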
Problem

Research questions and friction points this paper is trying to address.

sign-based optimizers
heavy-tailed noise
large language models
gradient noise
optimization theory
Innovation

Methods, ideas, or system contributions that make the work stand out.

sign-based optimization
heavy-tailed noise
convergence analysis
large language models
stochastic optimization