APOLLO: SGD-like Memory, AdamW-level Performance

📅 2024-12-06
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the excessive memory overhead of the AdamW optimizer in large language model (LLM) training, which severely limits scalability, this paper proposes APOLLO, a structured learning-rate scaling mechanism. Its core innovation is to coarsen AdamW's element-wise adaptive learning rate into structured updates, approximated via a low-rank auxiliary optimizer state built from pure random projection. The extreme rank-1 variant, APOLLO-Mini, achieves SGD-level memory consumption (only 2× parameter memory) while retaining AdamW-level convergence. The method combines low-rank state modeling, weight quantization, and naive data parallelism (DDP), requiring no system-level modifications. Experiments demonstrate a 3× throughput improvement on 8× A100 GPUs, enable LLaMA-7B pretraining on a single GPU in under 12 GB of memory, scale to LLaMA-13B under naive DDP, and match or surpass AdamW in pretraining quality.

📝 Abstract
Large language models (LLMs) are notoriously memory-intensive during training, particularly with the popular AdamW optimizer. This memory burden necessitates using more or higher-end GPUs or reducing batch sizes, limiting training scalability and throughput. To address this, various memory-efficient optimizers have been proposed to reduce optimizer memory usage. However, they face critical challenges: (i) reliance on costly SVD operations; (ii) significant performance trade-offs compared to AdamW; and (iii) still substantial optimizer memory overhead to maintain competitive performance. In this work, we identify that AdamW's learning rate adaptation rule can be effectively coarsened as a structured learning rate update. Based on this insight, we propose Approximated Gradient Scaling for Memory-Efficient LLM Optimization (APOLLO), which approximates learning rate scaling using an auxiliary low-rank optimizer state based on pure random projection. This structured learning rate update rule makes APOLLO highly tolerant to further memory reductions while delivering comparable pre-training performance. Even its rank-1 variant, APOLLO-Mini, achieves superior pre-training performance compared to AdamW with SGD-level memory costs. Extensive experiments demonstrate that the APOLLO series performs on-par with or better than AdamW, while achieving greater memory savings by nearly eliminating the optimization states of AdamW. These savings provide significant system-level benefits: (1) Enhanced Throughput: 3x throughput on an 8xA100-80GB setup compared to AdamW by supporting 4x larger batch sizes. (2) Improved Model Scalability: Pre-training LLaMA-13B with naive DDP on A100-80GB GPUs without system-level optimizations. (3) Low-End GPU Friendly Pre-training: Pre-training LLaMA-7B on a single GPU using less than 12 GB of memory with weight quantization.
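The abstract's key move — replacing AdamW's element-wise adaptive learning rate with a structured scale derived from an auxiliary low-rank state maintained via pure random projection — can be sketched roughly as follows. This is an illustrative reconstruction from the abstract alone, not the paper's exact algorithm: the rank-1 projection, the per-channel scaling rule, and all hyperparameters here are assumptions.

```python
import numpy as np

def apollo_mini_step(w, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One optimizer step sketching the APOLLO-Mini idea (assumed details).

    Project the full gradient to a tiny (here rank-1) space with a fixed
    random vector, track Adam-style moments only in that space, and turn
    the adapted-vs-raw ratio into one coarse scale factor per channel,
    which then multiplies a plain SGD-like update on the full gradient.
    """
    p = state["proj"]                      # (m,) fixed random projection vector
    r = grad @ p                           # (n,) low-rank projected gradient
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * r
    state["v"] = beta2 * state["v"] + (1 - beta2) * r**2
    m_hat = state["m"] / (1 - beta1 ** state["t"])     # bias-corrected moments,
    v_hat = state["v"] / (1 - beta2 ** state["t"])     # as in Adam/AdamW
    r_adapted = m_hat / (np.sqrt(v_hat) + eps)
    # Structured (coarse) learning-rate scaling: one factor per channel,
    # instead of AdamW's one factor per parameter.
    scale = np.abs(r_adapted) / (np.abs(r) + eps)      # (n,)
    return w - lr * scale[:, None] * grad

rng = np.random.default_rng(0)
n, m = 4, 6
# Optimizer state is O(n) per weight matrix here, versus O(n*m) for AdamW's
# two full-sized moment buffers -- the source of the claimed memory savings.
state = {"proj": rng.standard_normal(m) / np.sqrt(m),
         "m": np.zeros(n), "v": np.zeros(n), "t": 0}
w = rng.standard_normal((n, m))
w_new = apollo_mini_step(w, np.ones((n, m)), state)
```

The memory argument is visible in the state dict: only the rank-1 moments and a fixed projection vector persist between steps, so the optimizer footprint approaches that of plain SGD with momentum.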
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
AdamW Optimizer
Memory Consumption
Innovation

Methods, ideas, or system contributions that make the work stand out.

APOLLO algorithm
memory-efficient optimization
large language model training
Hanqing Zhu
University of Texas at Austin
Hardware/System-aware AI; Hardware for AI
Zhenyu (Allen) Zhang
Department of Electrical and Computer Engineering, The University of Texas at Austin
Wenyan Cong
Department of Electrical and Computer Engineering, The University of Texas at Austin
Xi Liu
AI at Meta
Sem Park
AI at Meta
Vikas Chandra
Meta
AI Research
Bo Long
Machine Learning
Data mining; machine learning
David Z. Pan
Professor, Silicon Labs Endowed Chair, ECE Dept., University of Texas at Austin
Electronic Design Automation; Design for Manufacturing; VLSI; Hardware; Machine Learning
Zhangyang Wang
Department of Electrical and Computer Engineering, The University of Texas at Austin
Jinwon Lee
AI at Meta