Gradient Multi-Normalization for Stateless and Scalable LLM Training

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Adaptive optimizers like Adam incur prohibitive GPU memory overhead in large language model (LLM) training because they store momentum and second-moment states. Method: This paper introduces a stateless optimization framework based on multi-norm gradient normalization, establishing a unified multi-norm constrained optimization problem and proving that SWAN (Ma et al., 2024) is a principled instantiation of it. Methodologically, the authors propose a lightweight alternating normalization scheme that replaces explicit state storage and relaxes SWAN's expensive whitening/orthogonalization step, preserving its fixed-point guarantees while drastically reducing computational cost. Results: In billion-parameter LLaMA pretraining, the proposed optimizer achieves a 3× speedup over Adam with substantially reduced GPU memory consumption, outperforming existing memory-efficient optimizers in both efficiency and model performance.

📝 Abstract
Training large language models (LLMs) typically relies on adaptive optimizers like Adam (Kingma & Ba, 2015), which store additional state information to accelerate convergence but incur significant memory overhead. Recent efforts, such as SWAN (Ma et al., 2024), address this by eliminating the need for optimizer states while achieving performance comparable to Adam via a multi-step preprocessing procedure applied to instantaneous gradients. Motivated by the success of SWAN, we introduce a novel framework for designing stateless optimizers that normalizes stochastic gradients according to multiple norms. To achieve this, we propose a simple alternating scheme to enforce the normalization of gradients w.r.t. these norms. We show that our procedure can produce, up to an arbitrary precision, a fixed point of the problem, and that SWAN is a particular instance of our approach with carefully chosen norms, providing a deeper understanding of its design. However, SWAN's computationally expensive whitening/orthogonalization step limits its practicality for large LMs. Using our principled perspective, we develop a more efficient, scalable, and practical stateless optimizer. Our algorithm relaxes the properties of SWAN, significantly reducing its computational cost while retaining its memory efficiency, making it applicable to training large-scale models. Experiments on pre-training LLaMA models with up to 1 billion parameters demonstrate a 3× speedup over Adam with significantly reduced memory requirements, outperforming other memory-efficient baselines.
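The abstract describes an alternating scheme that repeatedly renormalizes the gradient w.r.t. several norms until an approximate fixed point is reached. The sketch below is a minimal illustration of that idea (not the paper's exact algorithm or norms): a gradient matrix is alternately rescaled so that its rows and then its columns have unit ℓ2 norm, a cheap, stateless operation applied directly to the instantaneous gradient. The function name and iteration count are illustrative assumptions.

```python
import numpy as np

def alternating_multinorm(G, num_iters=10, eps=1e-8):
    """Illustrative multi-norm normalization sketch (not the paper's
    exact procedure): alternately rescale the gradient matrix G so that
    each row, then each column, has unit l2 norm. Iterating drives G
    toward an approximate joint fixed point of both normalizations,
    without storing any optimizer state across steps."""
    X = np.asarray(G, dtype=np.float64).copy()
    for _ in range(num_iters):
        # Normalize each row to unit l2 norm (eps guards against zeros).
        X /= np.linalg.norm(X, axis=1, keepdims=True) + eps
        # Normalize each column to unit l2 norm.
        X /= np.linalg.norm(X, axis=0, keepdims=True) + eps
    return X
```

Because the last operation normalizes columns, the returned matrix has (near-)unit column norms, while row norms are only approximately equalized; the paper's framework makes the fixed-point notion and the choice of norms precise.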
Problem

Research questions and friction points this paper is trying to address.

Develops stateless optimizers for LLM training
Reduces memory overhead in optimizer design
Enhances scalability and efficiency in large models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stateless optimizer design
Multi-norm gradient normalization
Efficient large-scale model training