🤖 AI Summary
To address the trade-off between the slow convergence of zeroth-order (ZO) optimizers and the high memory overhead of first-order optimizers in full-parameter fine-tuning of large language models (LLMs), this paper proposes FZOO, an efficient zeroth-order optimizer. Methodologically, FZOO introduces three key innovations: (1) batched one-sided gradient estimation leveraging Rademacher perturbations and CUDA-parallelized forward passes; (2) an adaptive step-size mechanism based on the standard deviation of batch losses, rigorously proven equivalent to normalized SGD with convergence guarantees; and (3) native compatibility with parameter-efficient fine-tuning (PEFT) methods. Evaluated across diverse models and 11 tasks, FZOO attains 3% higher average accuracy than MeZO while using 3× fewer forward passes; on RoBERTa-large it achieves a 5.6% average accuracy gain and an 18× reduction in forward passes, matching Adam's convergence speed and enabling memory-efficient full-parameter fine-tuning on a single GPU.
📝 Abstract
Fine-tuning large language models (LLMs) often faces GPU memory bottlenecks: the backward pass of first-order optimizers like Adam increases memory usage to more than 10 times the inference level (e.g., 633 GB for OPT-30B). Zeroth-order (ZO) optimizers avoid this cost by estimating gradients from forward passes alone, yet existing methods like MeZO typically require many more steps to converge. Can this trade-off between speed and memory in ZO be fundamentally improved? Normalized SGD demonstrates strong empirical performance with greater memory efficiency than Adam. In light of this, we introduce FZOO, a Fast Zeroth-Order Optimizer toward Adam-Scale Speed. FZOO reduces the total number of forward passes needed for convergence by employing batched one-sided estimates that adapt step sizes based on the standard deviation of batch losses. It also accelerates per-batch computation through Rademacher random-vector perturbations coupled with CUDA's parallel processing. Extensive experiments on diverse models, including RoBERTa-large, OPT (350M-66B), Phi-2, and Llama3, across 11 tasks validate FZOO's effectiveness. On average, FZOO outperforms MeZO by 3 percent in accuracy while requiring 3 times fewer forward passes. For RoBERTa-large, FZOO achieves average improvements of 5.6 percent in accuracy and an 18 times reduction in forward passes compared to MeZO, achieving convergence speeds comparable to Adam. We also provide a theoretical analysis proving FZOO's formal equivalence to a normalized-SGD update rule and its convergence guarantees. FZOO integrates smoothly into PEFT techniques, enabling even larger memory savings. Overall, our results make single-GPU, high-speed, full-parameter fine-tuning practical and point toward future work on memory-efficient pre-training.
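To make the two core ideas concrete, the batched one-sided estimator and the loss-std step-size rule can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's exact algorithm: the function names `zo_grad_estimate` and `fzoo_step` are hypothetical, the exact scaling of the adaptive step size is an assumption, and a real implementation would replace the toy loss with batched LLM forward passes run in parallel on GPU.

```python
import numpy as np

def zo_grad_estimate(theta, loss_fn, n=16, eps=1e-3, rng=None):
    """Batched one-sided zeroth-order gradient estimate with Rademacher
    perturbations: g ~ (1/n) * sum_i ((f(theta + eps*u_i) - f(theta)) / eps) * u_i.
    Uses only forward evaluations of loss_fn; no backpropagation."""
    rng = np.random.default_rng() if rng is None else rng
    f0 = loss_fn(theta)
    # Rademacher directions: each entry is +1 or -1 with equal probability.
    U = rng.integers(0, 2, size=(n, theta.size)) * 2 - 1
    # One-sided finite differences share the single unperturbed loss f0.
    losses = np.array([loss_fn(theta + eps * u) for u in U], dtype=float)
    grad_est = ((losses - f0) / eps) @ U / n
    return grad_est, losses

def fzoo_step(theta, loss_fn, n=16, eps=1e-3, base_lr=1e-4, rng=None):
    """One FZOO-style update: scale the step size by the standard deviation
    of the perturbed-batch losses (assumed form of the paper's adaptive
    rule, which is shown to be equivalent to a normalized-SGD update)."""
    g, losses = zo_grad_estimate(theta, loss_fn, n=n, eps=eps, rng=rng)
    lr = base_lr / (losses.std() + 1e-8)
    return theta - lr * g
```

On a smooth loss the estimator is unbiased up to O(eps) because E[u uᵀ] = I for Rademacher vectors, so averaging the directional differences recovers the gradient; in the paper the n perturbed forward passes are batched into one CUDA-parallel evaluation rather than a Python loop.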