FZOO: Fast Zeroth-Order Optimizer for Fine-Tuning Large Language Models towards Adam-Scale Speed

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the trade-off between the slow convergence of zeroth-order (ZO) optimizers and the high memory overhead of first-order optimizers in full-parameter fine-tuning of large language models (LLMs), this paper proposes FZOO, an efficient zeroth-order optimizer. Methodologically, FZOO introduces three key innovations: (1) batched one-sided gradient estimation leveraging Rademacher perturbations and CUDA-parallelized forward passes; (2) an adaptive step-size mechanism based on the standard deviation of batch losses, proven formally equivalent to normalized SGD with convergence guarantees; and (3) native compatibility with parameter-efficient fine-tuning (PEFT) methods. Across 11 tasks, FZOO outperforms MeZO by 3% in average accuracy while requiring 3× fewer forward passes; on RoBERTa-large, it achieves a 5.6% average accuracy gain and an 18× reduction in forward passes, matching Adam's convergence speed and enabling memory-efficient full-parameter fine-tuning on a single GPU.
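The core update described above can be illustrated in code. The following is a minimal sketch (not the authors' implementation): it forms a batched one-sided finite-difference gradient estimate from Rademacher perturbations and scales the step by the standard deviation of the perturbed batch losses, mimicking the normalized-SGD behaviour the paper proves equivalence to. The function name `fzoo_step` and the hyperparameters `eps`, `n_perturb`, and `base_lr` are illustrative assumptions.

```python
import torch


@torch.no_grad()
def fzoo_step(params, loss_fn, eps=1e-3, n_perturb=8, base_lr=1e-4):
    """One sketched FZOO-style update (simplified illustration, not the paper's code).

    Batched one-sided gradient estimate with Rademacher perturbations,
    step size scaled by the std of the perturbed losses (normalized-SGD flavour).
    """
    flat = torch.nn.utils.parameters_to_vector(params)
    loss0 = loss_fn()  # loss at the current parameters (one-sided baseline)

    # Rademacher perturbation directions: entries are +1 or -1.
    z = torch.randint(0, 2, (n_perturb, flat.numel()),
                      device=flat.device).float() * 2 - 1

    # Evaluate the loss at each perturbed point (the paper batches these
    # forward passes in parallel on the GPU; here they run sequentially).
    losses = torch.stack([
        (torch.nn.utils.vector_to_parameters(flat + eps * zi, params), loss_fn())[1]
        for zi in z
    ])

    # One-sided finite-difference coefficients and the gradient estimate.
    coeffs = (losses - loss0) / eps
    grad_est = (coeffs.unsqueeze(1) * z).mean(dim=0)

    # Adaptive step size from the std of the batch losses.
    step = base_lr / (losses.std() + 1e-8)
    torch.nn.utils.vector_to_parameters(flat - step * grad_est, params)
    return loss0.item()
```

On a toy quadratic objective, repeated calls drive the loss down without ever computing a backward pass, which is the memory saving the method targets.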

📝 Abstract
Fine-tuning large language models (LLMs) often faces GPU memory bottlenecks: the backward pass of first-order optimizers like Adam increases memory usage to more than 10 times the inference level (e.g., 633 GB for OPT-30B). Zeroth-order (ZO) optimizers avoid this cost by estimating gradients only from forward passes, yet existing methods like MeZO usually require many more steps to converge. Can this trade-off between speed and memory in ZO be fundamentally improved? Normalized-SGD demonstrates strong empirical performance with greater memory efficiency than Adam. In light of this, we introduce FZOO, a Fast Zeroth-Order Optimizer toward Adam-Scale Speed. FZOO reduces the total forward passes needed for convergence by employing batched one-sided estimates that adapt step sizes based on the standard deviation of batch losses. It also accelerates per-batch computation through the use of Rademacher random vector perturbations coupled with CUDA's parallel processing. Extensive experiments on diverse models, including RoBERTa-large, OPT (350M-66B), Phi-2, and Llama3, across 11 tasks validate FZOO's effectiveness. On average, FZOO outperforms MeZO by 3 percent in accuracy while requiring 3 times fewer forward passes. For RoBERTa-large, FZOO achieves average improvements of 5.6 percent in accuracy and an 18 times reduction in forward passes compared to MeZO, achieving convergence speeds comparable to Adam. We also provide theoretical analysis proving FZOO's formal equivalence to a normalized-SGD update rule and its convergence guarantees. FZOO integrates smoothly into PEFT techniques, enabling even larger memory savings. Overall, our results make single-GPU, high-speed, full-parameter fine-tuning practical and point toward future work on memory-efficient pre-training.
Problem

Research questions and friction points this paper is trying to address.

Reducing GPU memory usage in LLM fine-tuning
Improving zeroth-order optimizer convergence speed
Balancing memory efficiency and Adam-scale performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Batched one-sided gradient estimates for efficiency
Rademacher vectors with CUDA for speed
Normalized-SGD equivalence for convergence guarantees
Sizhe Dang — PhD student in Computer Science, Xi'an Jiaotong University (Computer Vision, Multimodal Analysis, Optimization Analysis)
Yangyang Guo — Xi'an Jiaotong University
Yanjun Zhao — UIUC
Haishan Ye — Xi'an Jiaotong University
Xiaodong Zheng — Xi'an Jiaotong University
Guang Dai — SGIT AI Lab
Ivor Tsang — A*STAR