Taming Momentum: Rethinking Optimizer States Through Low-Rank Approximation

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high memory overhead that modern optimizers such as Adam and Muon impose during large language model training, which stems from their first- and second-order momenta and limits scalability. The paper recasts the exponential moving average (EMA) behind these momenta as online linear regression, and uses this view to introduce LoRA-Pre, a low-rank optimizer for efficient pre-training. By compressing the momentum matrices into compact low-rank subspaces, LoRA-Pre substantially reduces memory consumption while maintaining or even improving optimization performance. Experiments show that LoRA-Pre surpasses baseline optimizers when pre-training Llama-family models (60M–1B parameters) while using only 1/8 of the baselines' rank, and in fine-tuning it outperforms standard LoRA by 6.17 points on Llama-2-7B and 3.14 points on Llama-3.1-8B.
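
The EMA-to-online-regression reframing can be checked numerically: the momentum recursion m_t = β·m_{t-1} + (1 − β)·g_t is exactly one gradient step, with learning rate 1 − β, on the squared loss ½‖m − g_t‖², i.e. online gradient descent for a regressor tracking the gradient stream. The NumPy sketch below only illustrates that equivalence under this assumed quadratic loss; it is not the paper's formal derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.9                    # EMA decay, as in Adam/Muon first-order momentum
lr = 1.0 - beta               # learning rate of the equivalent online regressor
grads = [rng.standard_normal((4, 4)) for _ in range(100)]  # toy gradient stream

# Standard EMA momentum: m_t = beta * m_{t-1} + (1 - beta) * g_t
m_ema = np.zeros((4, 4))
for g in grads:
    m_ema = beta * m_ema + (1.0 - beta) * g

# The same recursion, read as online gradient descent on
# L_t(m) = 0.5 * ||m - g_t||_F^2, whose gradient is (m - g_t).
m_online = np.zeros((4, 4))
for g in grads:
    m_online = m_online - lr * (m_online - g)

print(np.allclose(m_ema, m_online))   # True: the two updates coincide step by step
```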

📝 Abstract
Modern optimizers like Adam and Muon are central to training large language models, but their reliance on first- and second-order momenta introduces significant memory overhead, which constrains scalability and computational efficiency. In this work, we reframe the exponential moving average (EMA) used in these momenta as the training of a linear regressor via online gradient flow. Building on this equivalence, we introduce LoRA-Pre, a novel low-rank optimizer designed for efficient pre-training. Specifically, LoRA-Pre reduces the optimizer's memory footprint by decomposing the full momentum matrix into a compact low-rank subspace within the online linear learner, thereby maintaining optimization performance while improving memory efficiency. We empirically validate LoRA-Pre's efficacy by pre-training models from the Llama architecture family, scaling from 60M to 1B parameters. LoRA-Pre achieves the highest performance across all model sizes. Notably, LoRA-Pre demonstrates remarkable rank efficiency, achieving comparable or superior results using only 1/8 the rank of baseline methods. Beyond pre-training, we evaluate LoRA-Pre's effectiveness in fine-tuning scenarios. With the same rank, LoRA-Pre consistently outperforms all efficient fine-tuning baselines. Specifically, compared to standard LoRA, LoRA-Pre achieves substantial improvements of 3.14 points on Llama-3.1-8B and 6.17 points on Llama-2-7B, validating our approach's effectiveness across both pre-training and fine-tuning paradigms. Our code is publicly available at https://github.com/mrflogs/LoRA-Pre.
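
The page does not spell out LoRA-Pre's exact decomposition, so the sketch below only illustrates the generic idea the abstract describes: instead of storing a full n×k momentum matrix, keep an r×k compressed state in a rank-r subspace (here a fixed, hypothetical orthonormal projection P), apply the EMA there, and map the smoothed direction back before updating the weights. Names such as `low_rank_momentum_step` and `P` are ours, not the paper's.

```python
import numpy as np

def low_rank_momentum_step(W, g, P, m_small, beta=0.9, lr=1e-3):
    """One illustrative update that keeps momentum only in a rank-r subspace.

    W       : (n, k) weight matrix
    g       : (n, k) current gradient
    P       : (n, r) fixed orthonormal basis of the momentum subspace (r << n)
    m_small : (r, k) compressed momentum state (the only momentum we store)
    """
    g_small = P.T @ g                                  # project gradient into the subspace
    m_small = beta * m_small + (1.0 - beta) * g_small  # EMA on the compressed state
    update = P @ m_small                               # map the smoothed direction back
    return W - lr * update, m_small

# Toy usage: the momentum state shrinks from n*k floats to r*k floats.
rng = np.random.default_rng(0)
n, k, r = 512, 512, 64
P, _ = np.linalg.qr(rng.standard_normal((n, r)))       # hypothetical fixed projection
W = rng.standard_normal((n, k))
m_small = np.zeros((r, k))
for _ in range(10):
    g = rng.standard_normal((n, k))                    # stand-in for a real gradient
    W, m_small = low_rank_momentum_step(W, g, P, m_small)
```

This is where the memory saving comes from (r·k instead of n·k momentum entries, plus the projection); how LoRA-Pre actually chooses or adapts its subspace is specified in the paper, not here.
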
Problem

Research questions and friction points this paper is trying to address.

optimizer memory overhead
large language models
momentum
scalability
computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

low-rank optimization
optimizer memory efficiency
exponential moving average
online linear regression
LoRA-Pre