🤖 AI Summary
During large language model (LLM) pretraining, scaling up model size frequently triggers loss spikes that severely compromise training stability and final performance. Conventional global gradient clipping handles this poorly because it ignores parameter heterogeneity and the gradual decay of gradient norms over training. To address these limitations, we propose AdaGC, an adaptive gradient clipping framework that assigns each parameter a local clipping threshold updated via an exponential moving average of its gradient norm. We theoretically establish a convergence guarantee under non-convex optimization and show compatibility with mainstream optimizers (e.g., AdamW, Lion) and architectures (e.g., Llama-2, CLIP), including multimodal settings. Experiments demonstrate that AdaGC eliminates loss spikes entirely on Llama-2 7B/13B, reduces WikiText perplexity by 3.5%, and accelerates CLIP ViT-Base convergence by 25%.
📝 Abstract
Large Language Models (LLMs) increasingly suffer loss spikes as model size scales, undermining training stability and final performance. While gradient clipping mitigates this issue, traditional global approaches handle parameter-specific gradient variations and decaying gradient norms poorly. We propose **AdaGC**, an adaptive gradient clipping framework that automatically adjusts a local threshold for each parameter through an exponential moving average of that parameter's gradient norms. Theoretical analysis proves AdaGC's convergence under non-convex conditions. Extensive experiments demonstrate significant improvements: on Llama-2 7B/13B, AdaGC completely eliminates loss spikes; compared to global clipping, it reduces WikiText perplexity by 3.5% (+0.14pp LAMBADA accuracy) for 7B and achieves 0.65% lower training loss with 1.47% lower validation perplexity for 13B. For CLIP ViT-Base, AdaGC converges 25% faster than StableAdamW while fully eliminating spikes. The method is effective across architectures (Llama-2 7B/13B) and modalities (CLIP), and integrates with diverse optimizers such as AdamW and Lion. Source code will be released on GitHub.
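The core mechanism described above can be sketched as follows. This is a minimal illustration based only on the abstract's description, not the authors' released code: each parameter keeps an exponential moving average (EMA) of its gradient norm, and the gradient is rescaled whenever its norm exceeds a multiple of that EMA. The class name `AdaGCClipper` and the hyperparameters `beta` and `lam` are illustrative placeholders; the paper's exact update rule and defaults may differ.

```python
import numpy as np

class AdaGCClipper:
    """Sketch of per-parameter adaptive gradient clipping (hypothetical API).

    Maintains an EMA of each parameter's gradient norm and clips that
    parameter's gradient whenever its norm exceeds lam * EMA.
    """

    def __init__(self, beta=0.98, lam=1.05):
        self.beta = beta  # EMA decay for the gradient-norm estimate (assumed value)
        self.lam = lam    # clipping multiplier on the EMA threshold (assumed value)
        self.ema = {}     # per-parameter EMA of gradient norms

    def clip(self, name, grad):
        norm = np.linalg.norm(grad)
        if name not in self.ema:
            # First step: initialize the EMA with the observed norm, no clipping.
            self.ema[name] = norm
            return grad
        threshold = self.lam * self.ema[name]
        if norm > threshold:
            # Rescale the gradient to the threshold, preserving its direction.
            grad = grad * (threshold / norm)
            norm = threshold  # update the EMA with the clipped norm
        self.ema[name] = self.beta * self.ema[name] + (1 - self.beta) * norm
        return grad

# Usage: clip each parameter's gradient before the optimizer step.
clipper = AdaGCClipper()
grads = {"layer1.weight": np.array([0.3, 0.4]), "layer1.bias": np.array([50.0])}
clipped = {name: clipper.clip(name, g) for name, g in grads.items()}
```

Because thresholds are local, a spike in one parameter's gradient is suppressed without shrinking gradients elsewhere, and the EMA lets each threshold track the natural decay of gradient norms over training.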