GradPower: Powering Gradients for Faster Language Model Pre-Training

📅 2025-05-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address slow convergence in large language model (LLM) pre-training, this paper proposes GradPower, a lightweight gradient-transformation technique that applies a sign-power transformation to gradients before feeding them into a standard optimizer. GradPower requires no additional hyperparameter tuning and integrates via a single line of code. Notably, it accelerates convergence under warmup-stable-decay learning-rate schedules for Mixture-of-Experts (MoE) architectures while also reducing final loss. Theoretically, the authors develop a noise-robustness analysis that explains the mechanism behind the transformation. Empirically, GradPower consistently improves performance across LLaMA and Qwen2MoE models (66M–2B parameters) trained on C4 and OpenWebText, and is compatible with mainstream optimizers including Adam and Muon, enhancing both training speed and final model quality.

πŸ“ Abstract
We propose GradPower, a lightweight gradient-transformation technique for accelerating language model pre-training. Given a gradient vector $g=(g_i)_i$, GradPower first applies the elementwise sign-power transformation $\varphi_p(g)=(\mathrm{sign}(g_i)\,|g_i|^p)_i$ for a fixed $p>0$, and then feeds the transformed gradient into a base optimizer. Notably, GradPower requires only a single-line code change and no modifications to the base optimizer's internal logic, including its hyperparameters. When applied to Adam (termed AdamPower), GradPower consistently achieves lower terminal loss across diverse architectures (LLaMA, Qwen2MoE), parameter scales (66M to 2B), datasets (C4, OpenWebText), and learning-rate schedules (cosine, warmup-stable-decay). The most pronounced gains are observed when training modern mixture-of-experts models with warmup-stable-decay schedules. GradPower also integrates seamlessly with other state-of-the-art optimizers, such as Muon, yielding further improvements. Finally, we provide theoretical analyses that reveal the underlying mechanism of GradPower and highlight the influence of gradient noise.
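The sign-power transformation from the abstract can be sketched in a few lines of NumPy. This is a minimal illustration of $\varphi_p$, not the authors' implementation; the exponent value used below is purely illustrative (the paper treats $p>0$ as a fixed constant applied before the base optimizer update):

```python
import numpy as np

def gradpower(grad: np.ndarray, p: float) -> np.ndarray:
    """Elementwise sign-power transform: phi_p(g)_i = sign(g_i) * |g_i|**p.

    The transformed gradient would then be passed to a base optimizer
    (e.g. Adam) in place of the raw gradient.
    """
    return np.sign(grad) * np.abs(grad) ** p

# Illustrative example with p = 2 (hypothetical choice of exponent):
g = np.array([0.5, -0.25, 0.0, 2.0])
print(gradpower(g, p=2.0))  # sign is preserved, magnitudes are powered
```

With $p=2$, the entry $-0.25$ maps to $-0.0625$: the sign is kept while the magnitude is raised to the power $p$, which is the "single-line change" the paper describes inserting before the optimizer's update rule.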
Problem

Research questions and friction points this paper is trying to address.

Slow convergence in large language model pre-training
Improving training efficiency without modifying base-optimizer hyperparameters
Accelerating training of mixture-of-experts models and diverse architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight gradient-transformation technique
Single-line code change required
Seamless integration with existing optimizers
Mingze Wang
School of Mathematical Sciences, Peking University
Machine Learning Theory · Deep Learning Theory · Optimization
Jinbo Wang
Texas A&M University
Ocean dynamics
Jiaqi Zhang
Meituan, Beijing
Wei Wang
Meituan, Beijing
Peng Pei
Meituan, Beijing
Xunliang Cai
Meituan, Beijing
E. Weinan
School of Mathematical Sciences, Peking University; AI for Science Institute, Beijing
Lei Wu
School of Mathematical Sciences, Peking University; AI for Science Institute, Beijing