🤖 AI Summary
Existing low-rank gradient projection methods (e.g., GaLore) introduce inherent bias into large language model (LLM) training, which can cause convergence failure and degraded performance. To address this, we propose a layer-wise unbiased sampling optimization framework: the first layer-wise random matrix sampling strategy that provably eliminates gradient bias in low-rank projections. We theoretically prove that our method, GaLore Unbiased with Muon (GUM), restores the convergence guarantees of the base Muon optimizer while preserving GaLore-level memory efficiency. By integrating low-rank compression, adaptive optimization, and unbiased sampling, GUM enables efficient, unbiased parameter updates within the GaLore/Muon framework. Experiments demonstrate that our method significantly outperforms GaLore in both LLM pretraining and fine-tuning, even exceeding full-parameter training in accuracy, while promoting a more uniform intra-layer distribution of knowledge and better memorization.
📝 Abstract
Memory-efficient optimization is critical for training increasingly large language models (LLMs). A popular strategy is gradient low-rank projection, which stores only the projected optimizer states, with GaLore being a representative example. However, a significant drawback of many such methods is their lack of convergence guarantees: various low-rank projection approaches introduce inherent biases relative to the original optimization algorithms, which contribute to performance gaps compared with full-parameter training. To tackle this problem, this paper investigates a layer-wise sampling technique for debiasing low-rank projection mechanisms. In particular, an instantiation of this paradigm yields a novel, unbiased low-rank optimization method built upon GaLore's mechanism and the Muon algorithm, named GaLore Unbiased with Muon (GUM). We theoretically prove that our method matches the convergence guarantees of the base Muon algorithm while preserving the memory efficiency of low-rank techniques. Experiments on LLM fine-tuning and pretraining also demonstrate non-trivial improvements over GaLore, and even better performance than full-parameter training. Further investigation shows that the improvement stems from a more uniform distribution of knowledge inside layers, leading to more efficient utilization of the model's parameter space and better memorization.
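The debiasing idea can be illustrated with a minimal NumPy sketch (an illustration of the general principle, not the paper's exact GUM algorithm): if the low-rank projection matrix is resampled randomly so that its expectation acts as the identity, the compressed-then-lifted gradient is unbiased, whereas a fixed projection (GaLore-style) systematically discards everything outside its subspace. Here we use a Gaussian sketch `S` with `E[S @ S.T] = I` as the hypothetical sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch(n, r, rng):
    """Random n x r projection with E[S @ S.T] = I_n (Gaussian sketch).

    Entries are i.i.d. N(0, 1/r), so sum_k E[S_ik S_jk] = delta_ij.
    """
    return rng.normal(0.0, np.sqrt(1.0 / r), size=(n, r))

n, r = 8, 2                      # full dim vs. low rank
G = rng.normal(size=(n, 4))      # stand-in for one layer's gradient

# A fixed projection keeps only the component of G inside one r-dim
# subspace -- biased. Resampling S at every step removes the bias:
# averaging the compress-then-lift estimate S @ (S.T @ G) over many
# draws recovers G, even though each step stores only an r-rank state.
T = 4000
est = np.zeros_like(G)
for _ in range(T):
    S = sketch(n, r, rng)
    est += S @ (S.T @ G)         # compress to rank r, lift back
est /= T

rel_err = np.linalg.norm(est - G) / np.linalg.norm(G)
```

The per-step compressed object `S.T @ G` has shape `(r, 4)` instead of `(n, 4)`, which is where the GaLore-style memory saving comes from; unbiasedness means the Monte Carlo average converges to the true gradient rather than to its projection onto a fixed subspace.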