Taming LLMs by Scaling Learning Rates with Gradient Grouping

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Adaptive optimizers (e.g., AdamW) in large language model (LLM) training suffer from inaccurate per-parameter learning rate estimation, leading to training instability, slow convergence, and poor compatibility with parameter-efficient fine-tuning (PEFT) methods. Method: This paper proposes Scaling with Gradient Grouping (SGG), an optimizer wrapper that introduces dynamic gradient clustering and group-specific scaling while preserving per-parameter adaptivity. SGG clusters gradients within each layer by statistical similarity, applies uniform scaling within each group, and calibrates scaling factors differently across groups, thereby imposing collective group-level constraints. Contribution/Results: Experiments across multiple LLM scales demonstrate that SGG accelerates convergence, improves robustness to batch-size and learning-rate variations, and integrates seamlessly with diverse PEFT techniques, yielding consistent performance gains without modifications to the architecture or training pipeline.

📝 Abstract
Training large language models (LLMs) poses challenges due to their massive scale and heterogeneous architectures. While adaptive optimizers like AdamW help address gradient variations, they still struggle with efficient and effective parameter-wise learning rate estimation, resulting in training instability, slow convergence, and poor compatibility with parameter-efficient fine-tuning (PEFT) techniques. This work introduces Scaling with Gradient Grouping (SGG), an optimizer wrapper that improves adaptive learning rate estimation via dynamic grouping and group-specific scaling. SGG first groups gradient statistics in each layer into clusters and then applies cluster-specific scaling to calibrate learning rates for each parameter, thus imposing collective group-wise constraints while maintaining precise per-parameter adaptation. Experiments on diverse (M)LLM benchmarks show that SGG integrates seamlessly with existing optimizers, offering consistent gains and faster convergence over baselines across various model sizes. Its stability under varying batch sizes and learning rates establishes SGG as a robust choice for LLM optimization.
Problem

Research questions and friction points this paper is trying to address.

Addresses training instability in large language models
Improves learning rate estimation for heterogeneous architectures
Enhances compatibility with parameter-efficient fine-tuning techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic gradient grouping for learning rate scaling
Cluster-specific scaling for precise adaptation
Seamless integration with existing optimizers
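The grouping-then-scaling idea in the bullets above can be sketched as a toy reconstruction. This is not the paper's exact algorithm: the clustering rule (1-D k-means on log second-moment estimates), the group statistic (median), and the scaling formula (pulling each group toward the layer-level median) are illustrative assumptions, and `sgg_scale` is a hypothetical helper name.

```python
import numpy as np

def kmeans_1d(x, k=3, iters=20):
    # Tiny deterministic 1-D k-means; centers start at evenly spaced quantiles.
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

def sgg_scale(second_moment, k=3, eps=1e-12):
    """Illustrative SGG-style group scaling (assumed reconstruction):
    cluster a layer's per-parameter second-moment estimates on a log
    scale, then give each cluster one multiplier that pulls its
    effective step size toward the layer-level median statistic."""
    v = np.asarray(second_moment, dtype=float).ravel()
    labels = kmeans_1d(np.log(v + eps), k=k)
    layer_med = np.median(v)
    scale = np.ones_like(v)
    for j in range(k):
        mask = labels == j
        if mask.any():
            group_med = np.median(v[mask])
            # Groups with unusually small (large) statistics get their
            # learning rate scaled up (down) toward the layer median,
            # while parameters inside a group keep one shared factor.
            scale[mask] = np.sqrt(layer_med / (group_med + eps))
    return scale.reshape(np.shape(second_moment))
```

In a wrapper around an adaptive optimizer such as AdamW, the returned per-parameter multiplier would be applied to each layer's update, leaving the base optimizer's per-parameter moment estimates untouched.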
Authors

Siyuan Li
Zhejiang University, Westlake University

Juanxi Tian
Westlake University, Peking University

Zedong Wang
The Hong Kong University of Science and Technology (HKUST)

Xin Jin
Westlake University

Zicheng Liu
Zhejiang University, Westlake University

Wentao Zhang
Institute of Physics, Chinese Academy of Sciences

Dan Xu
The Hong Kong University of Science and Technology