🤖 AI Summary
Adaptive optimizers (e.g., AdamW) in large language model (LLM) training suffer from inaccurate per-parameter learning rate estimation, leading to training instability, slow convergence, and poor compatibility with parameter-efficient fine-tuning (PEFT) methods.
Method: This paper proposes Scaling with Gradient Grouping (SGG), an optimizer wrapper that introduces dynamic gradient clustering and group-specific scaling while preserving per-parameter adaptivity. SGG clusters gradients within each layer by statistical similarity, applies a uniform scaling factor inside each cluster, and calibrates those factors differently across clusters, thereby imposing collective group-level constraints on the learning rates.
Contribution/Results: Experiments across multiple LLM scales demonstrate that SGG accelerates convergence, improves robustness to batch-size and learning-rate variations, and integrates seamlessly with diverse PEFT techniques, yielding consistent performance gains without modifying the architecture or training pipeline.
📝 Abstract
Training large language models (LLMs) poses challenges due to their massive scale and heterogeneous architectures. While adaptive optimizers like AdamW help address gradient variations, they still struggle with efficient and effective parameter-wise learning rate estimation, resulting in training instability, slow convergence, and poor compatibility with parameter-efficient fine-tuning (PEFT) techniques. This work introduces Scaling with Gradient Grouping (SGG), an optimizer wrapper that improves adaptive learning rate estimation by dynamic grouping and group-specific scaling. SGG first groups gradient statistics in each layer into clusters and then applies cluster-specific scaling to calibrate learning rates for each parameter, thus imposing collective group-wise constraints while maintaining precise per-parameter adaptation. Experiments on diverse (M)LLM benchmarks show that SGG integrates seamlessly with existing optimizers and offers consistent gains and faster convergence over baselines across various model sizes. Its stability across varying batch sizes and learning rates establishes SGG as a robust choice for LLM optimization.
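To make the grouping-and-scaling idea concrete, below is a minimal sketch, not the authors' released implementation. It assumes we already have the per-parameter update that an adaptive optimizer such as AdamW would take for one weight matrix; the rows are clustered by a simple magnitude statistic (quantile bucketing stands in for the paper's clustering), and each cluster is rescaled toward the layer-wide magnitude. All names here (group_scaled_update, num_groups) are illustrative.

```python
# Illustrative sketch of group-wise scaling of an adaptive update.
# Not the official SGG code; clustering and statistics are simplified.
import torch

def group_scaled_update(update: torch.Tensor, num_groups: int = 3) -> torch.Tensor:
    """Apply group-specific scaling to a 2-D per-parameter update matrix.

    update: the step an adaptive optimizer (e.g., AdamW) would take for one
            weight matrix, shape (out_features, in_features).
    """
    # 1) Per-row gradient statistic (here: RMS of each update row).
    row_stat = update.pow(2).mean(dim=1).sqrt()               # (out_features,)

    # 2) Cluster rows by statistical similarity. Quantile-based bucketing
    #    into `num_groups` groups stands in for the paper's clustering.
    quantiles = torch.quantile(
        row_stat, torch.linspace(0, 1, num_groups + 1, device=update.device)
    )
    group_id = torch.bucketize(row_stat, quantiles[1:-1])     # values in [0, K-1]

    # 3) Group-specific calibration: pull each group's magnitude toward the
    #    layer-wide RMS, so outlier rows are damped and tiny rows are boosted.
    layer_rms = row_stat.mean().clamp_min(1e-12)
    scale = torch.ones_like(row_stat)
    for g in range(num_groups):
        mask = group_id == g
        if mask.any():
            group_rms = row_stat[mask].mean().clamp_min(1e-12)
            scale[mask] = layer_rms / group_rms                # uniform within a group

    # 4) The same factor applies to every parameter in a group, while the
    #    per-parameter direction and adaptivity of the update are preserved.
    return update * scale.unsqueeze(1)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Fake update for an 8x16 layer with rows of very different magnitudes.
    raw_update = torch.randn(8, 16) * torch.logspace(-3, 0, 8).unsqueeze(1)
    calibrated = group_scaled_update(raw_update, num_groups=3)
    print(calibrated.shape)  # torch.Size([8, 16])
```

In an actual optimizer wrapper, a function like this would be invoked per layer after the base optimizer computes its raw adaptive update and before the parameters are modified, which matches the paper's description of SGG as a wrapper that leaves the underlying optimizer and training pipeline unchanged.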