AdaGC: Improving Training Stability for Large Language Model Pretraining

📅 2025-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
During large language model (LLM) pretraining, scaling up model size frequently triggers loss spikes that severely compromise training stability and final performance. Conventional global gradient clipping fails to account for parameter heterogeneity and the gradual decay of gradient norms over training. To address this, we propose AdaGC, an adaptive gradient clipping framework that assigns each parameter a local clipping threshold, updated via an exponential moving average of its gradient norms. We theoretically establish a convergence guarantee under non-convex optimization and show compatibility with mainstream optimizers (e.g., AdamW, Lion) and architectures (e.g., Llama-2, CLIP), including multimodal settings. Experiments demonstrate that AdaGC eliminates loss spikes entirely on Llama-2 7B/13B, reduces WikiText perplexity by 3.5% on the 7B model, and accelerates CLIP ViT-Base convergence by 25%.
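The mechanism described above lends itself to a short sketch. The PyTorch snippet below is a minimal illustration of per-parameter clipping against an EMA-tracked local threshold; the hyperparameters `beta` and `lam`, the bootstrap from the first observed norm, and the choice to update the EMA with the clipped norm are illustrative assumptions, not the paper's reference implementation.

```python
import torch


def adagc_clip_(params, ema_norms, beta=0.98, lam=1.05, eps=1e-8):
    """Clip each parameter's gradient against its own EMA-based local threshold.

    ema_norms maps a parameter index to the running EMA of that parameter's
    gradient norm. beta (EMA decay), lam (threshold slack), and updating the
    EMA with the clipped norm are assumptions made for this sketch.
    """
    for i, p in enumerate(params):
        if p.grad is None:
            continue
        g_norm = p.grad.norm().item()
        if i not in ema_norms:
            # Bootstrap the local threshold from the first observed norm.
            ema_norms[i] = g_norm
            continue
        threshold = lam * ema_norms[i]
        if g_norm > threshold:
            # Rescale the gradient in place so its norm equals the threshold.
            p.grad.mul_(threshold / (g_norm + eps))
            g_norm = threshold
        # Track the (possibly clipped) norm with an exponential moving average.
        ema_norms[i] = beta * ema_norms[i] + (1.0 - beta) * g_norm
```

Unlike a single global threshold, each tensor here adapts to its own gradient scale, which is the paper's answer to parameter heterogeneity and decaying gradient norms.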

📝 Abstract
Large Language Models (LLMs) face increasing loss spikes during scaling, undermining training stability and final performance. While gradient clipping mitigates this issue, traditional global approaches poorly handle parameter-specific gradient variations and decaying gradient norms. We propose **AdaGC**, an adaptive gradient clipping framework that automatically adjusts local thresholds per parameter through exponential moving average of gradient norms. Theoretical analysis proves AdaGC's convergence under non-convex conditions. Extensive experiments demonstrate significant improvements: On Llama-2 7B/13B, AdaGC completely eliminates loss spikes while reducing WikiText perplexity by 3.5% (+0.14pp LAMBADA accuracy) for 7B and achieving 0.65% lower training loss with 1.47% reduced validation perplexity for 13B compared to global clipping. For CLIP ViT-Base, AdaGC converges 25% faster than StableAdamW with full spike elimination. The method shows universal effectiveness across architectures (Llama-2 7B/13B) and modalities (CLIP), with successful integration into diverse optimizers like AdamW and Lion. Source code will be released on GitHub.
Problem

Research questions and friction points this paper is trying to address.

Improves training stability for large language models.
Reduces loss spikes during model scaling.
Adapts gradient clipping thresholds per parameter.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive gradient clipping framework
Automatically adjusts local thresholds
Exponential moving averages of gradient norms
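As a usage note, a per-parameter clipper like the `adagc_clip_` sketch above would slot into a standard training step in place of global clipping (e.g., `torch.nn.utils.clip_grad_norm_`). The model, data, and hyperparameters below are placeholders:

```python
import torch

model = torch.nn.Linear(256, 256)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
ema_norms = {}  # per-parameter EMA state consumed by adagc_clip_

for step in range(100):
    x = torch.randn(32, 256)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    # Per-parameter adaptive clipping instead of one global threshold.
    adagc_clip_(list(model.parameters()), ema_norms)
    optimizer.step()
```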
Guoxia Wang
Baidu Inc., China
Shuai Li
Baidu Inc., China
Congliang Chen
Ph.D. Student, the Chinese University of Hong Kong (Shenzhen)
Optimization, Machine Learning
Jinle Zeng
Baidu Inc., China
Jiabin Yang
Baidu Inc., China
Tao Sun
National University of Defense Technology, China
Yanjun Ma
Baidu Inc., China
Dianhai Yu
Baidu
Deep Learning, Natural Language Processing, Machine Learning, Artificial Intelligence
Li Shen
Shenzhen Campus of Sun Yat-sen University, China