AI Summary
To address slow convergence in large language model (LLM) pretraining, this paper proposes GradPower, a lightweight gradient-transformation technique that applies an elementwise sign-power transformation to gradients before feeding them into a standard base optimizer. GradPower requires no hyperparameter tuning and integrates via a single-line code change. It is the first method to significantly accelerate convergence under warmup-stable-decay learning-rate schedules in Mixture-of-Experts (MoE) architectures while also reducing final loss. Theoretically, the authors establish a noise-robustness analysis framework for the transformation. Empirically, GradPower consistently improves performance across LLaMA and Qwen2MoE models (66M to 2B parameters) trained on C4 and OpenWebText, and is compatible with mainstream optimizers including Adam and Muon. It simultaneously improves training speed and final model quality for MoE-based LLMs.
Abstract
We propose GradPower, a lightweight gradient-transformation technique for accelerating language-model pre-training. Given a gradient vector $g=(g_i)_i$, GradPower first applies the elementwise sign-power transformation $\varphi_p(g)=(\mathrm{sign}(g_i)\,|g_i|^p)_{i}$ for a fixed $p>0$, and then feeds the transformed gradient into a base optimizer. Notably, GradPower requires only a single-line code change and no modifications to the base optimizer's internal logic, including its hyperparameters. When applied to Adam (termed AdamPower), GradPower consistently achieves lower terminal loss across diverse architectures (LLaMA, Qwen2MoE), parameter scales (66M to 2B), datasets (C4, OpenWebText), and learning-rate schedules (cosine, warmup-stable-decay). The most pronounced gains are observed when training modern mixture-of-experts models with warmup-stable-decay schedules. GradPower also integrates seamlessly with other state-of-the-art optimizers, such as Muon, yielding further improvements. Finally, we provide theoretical analyses that reveal the underlying mechanism of GradPower and highlight the influence of gradient noise.
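To make the transformation concrete, here is a minimal sketch of the elementwise sign-power map $\varphi_p$ in plain Python. The exponent `p=1.2` is an illustrative value chosen for the example, not a recommendation from the paper; in practice the transformed gradient would replace the raw gradient in the base optimizer's update step.

```python
import math

def sign_power(g, p=1.2):
    """Elementwise sign-power transform: phi_p(g)_i = sign(g_i) * |g_i|**p.

    NOTE: p=1.2 is an illustrative default, not the paper's prescribed value.
    """
    # copysign attaches the sign of x to the magnitude |x|**p
    return [math.copysign(abs(x) ** p, x) for x in g]

# With p=1, the transform is the identity; p != 1 reshapes gradient magnitudes
# while preserving each component's sign.
raw_grad = [4.0, -4.0, 0.0]
transformed = sign_power(raw_grad, p=0.5)  # -> [2.0, -2.0, 0.0]
```

In a framework such as PyTorch, the same idea is a one-line tensor expression (e.g. `g.sign() * g.abs().pow(p)`) applied to the gradient before the optimizer step, which matches the paper's claim of a single-line integration.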