🤖 AI Summary
Problem: Standard optimizers such as Adam struggle with the long-tailed data distributions encountered in large language model (LLM) training, leading to poor memorization of tail classes. Method: This work analyzes Muon's update rule and shows that it is intrinsically aligned with the outer-product structure of linear associative memory, making it particularly suited to the memory-carrying Value-Output (VO) attention weights and FFN parameters in Transformers. Combining singular-spectrum analysis with a single-layer associative memory model, the authors demonstrate that Muon mitigates class imbalance by equalizing effective learning across classes. Results: Experiments on real-world corpora show that Muon significantly improves tail-class memorization and that this advantage is robust across diverse feature embeddings. Crucially, the study provides the first interpretable theoretical framework explaining Muon's long-tail optimization advantage from an associative memory perspective, establishing a mechanistic understanding of optimizer behavior in LLM training under data imbalance.
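To make the update rule discussed above concrete, here is a minimal sketch of the orthogonalization step commonly attributed to Muon: momentum followed by replacing the momentum matrix with its nearest semi-orthogonal matrix. The exact SVD below stands in for the Newton-Schulz iteration used in practice, and the function names, learning rate, and momentum coefficient are illustrative assumptions, not values from the paper.

```python
import numpy as np

def muon_orthogonalize(m: np.ndarray) -> np.ndarray:
    """Replace a momentum matrix by its nearest semi-orthogonal matrix.

    Muon approximates this with a Newton-Schulz iteration; an exact SVD is
    used here for clarity: M = U S V^T -> U V^T, i.e. every singular value
    is set to 1, which is what makes the update's singular spectrum isotropic.
    """
    u, _, vt = np.linalg.svd(m, full_matrices=False)
    return u @ vt

def muon_step(w, grad, momentum, lr=0.02, beta=0.95):
    """One simplified Muon step on a weight matrix w (illustrative only)."""
    momentum = beta * momentum + grad           # heavy-ball momentum
    w = w - lr * muon_orthogonalize(momentum)   # orthogonalized update
    return w, momentum
```

The point relevant to the argument here is that the returned update always has a flat (isotropic) singular spectrum, no matter how skewed the raw gradient's spectrum is.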
📝 Abstract
The Muon optimizer is consistently faster than Adam in training Large Language Models (LLMs), yet the mechanism underlying its success remains unclear. This paper demystifies that mechanism through the lens of associative memory. By ablating the Transformer components optimized by Muon, we reveal that the associative memory parameters of LLMs, namely the Value and Output (VO) attention weights and the Feed-Forward Networks (FFNs), are the primary contributors to Muon's superiority. Motivated by this associative memory view, we then explain Muon's advantage on real-world corpora, which are intrinsically heavy-tailed: many classes (tail classes) appear far less frequently than a few dominant head classes. The advantage stems from two key properties: (i) Muon's update rule consistently yields a more isotropic singular spectrum than Adam's; and, as a result, (ii) on heavy-tailed data it optimizes tail classes more effectively than Adam. Beyond this empirical evidence, we theoretically confirm these findings by analyzing a one-layer associative memory model under class-imbalanced data. We prove that Muon consistently achieves balanced learning across classes regardless of the feature embeddings, whereas Adam can induce large disparities in learning errors depending on embedding properties. In summary, our empirical observations and theoretical analyses reveal Muon's core advantage: its update rule aligns with the outer-product structure of linear associative memories, enabling more balanced and effective learning of tail classes in heavy-tailed distributions than Adam.
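To make the heavy-tailed, one-layer associative memory setting tangible, the sketch below trains a linear map on Zipf-distributed classes with either an orthogonalized (Muon-like) update or a simplified Adam-style step, then prints head- and tail-class errors. All dimensions, the Zipf exponent, learning rates, and the stripped-down Adam variant are assumptions for illustration only; this is not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes, steps = 64, 32, 500

# Random class embeddings (inputs) and targets; a Zipf-like class frequency
# stands in for the heavy-tailed token statistics discussed in the abstract.
E = rng.standard_normal((n_classes, d)) / np.sqrt(d)   # input embeddings e_k
Y = rng.standard_normal((n_classes, d)) / np.sqrt(d)   # target vectors  y_k
p = 1.0 / (1.0 + np.arange(n_classes)) ** 1.5          # class frequencies, head -> tail
p /= p.sum()

def orthogonalize(m):
    """Nearest semi-orthogonal matrix (exact stand-in for Newton-Schulz)."""
    u, _, vt = np.linalg.svd(m, full_matrices=False)
    return u @ vt

def per_class_error(w):
    return np.linalg.norm(E @ w.T - Y, axis=1)          # ||W e_k - y_k|| per class

def train(rule):
    w = np.zeros((d, d))
    m1 = np.zeros_like(w)                                # 1st-moment accumulator (Adam-like)
    v = np.zeros_like(w)                                 # 2nd-moment accumulator (Adam-like)
    for _ in range(steps):
        idx = rng.choice(n_classes, size=256, p=p)       # heavy-tailed mini-batch
        resid = E[idx] @ w.T - Y[idx]                    # (batch, d) residuals
        grad = resid.T @ E[idx] / len(idx)               # sum of outer products (W e - y) e^T
        if rule == "muon-like":
            w -= 0.05 * orthogonalize(grad)
        else:                                            # simplified Adam (no bias correction)
            m1 = 0.9 * m1 + 0.1 * grad
            v = 0.999 * v + 0.001 * grad ** 2
            w -= 0.01 * m1 / (np.sqrt(v) + 1e-8)
    return per_class_error(w)

for rule in ("muon-like", "adam-like"):
    err = train(rule)
    print(f"{rule:10s} head err {err[:4].mean():.3f}  tail err {err[-4:].mean():.3f}")
```

Comparing the printed head- and tail-class errors is the toy analogue of the balanced-learning claim made in the abstract; the gradient's explicit outer-product form is what the orthogonalized update acts on.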