KerZOO: Kernel Function Informed Zeroth-Order Optimization for Accurate and Accelerated LLM Fine-Tuning

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high memory overhead of first-order methods and the slow, unstable convergence of zeroth-order (ZO) optimization caused by bias in gradient estimation during large language model (LLM) fine-tuning, this paper proposes Kernel-function informed Zeroth-Order Optimization (KerZOO). The authors theoretically characterize the low-order bias inherent in ZO gradient estimators and design a kernel-function framework, motivated by tools from mathematical physics, that explicitly suppresses this bias, improving gradient estimation accuracy and optimization stability. KerZOO is compatible with parameter-efficient fine-tuning paradigms (e.g., LoRA) and relies only on forward passes. Experiments on OPT-2.7B show that, compared to MeZO, KerZOO achieves 2.9% and 2.6% absolute accuracy gains on WSC and MultiRC, respectively, reduces GPU training hours by up to 74% and 44%, and substantially decreases the number of iterations needed to converge.

📝 Abstract
Large language models (LLMs) have demonstrated impressive capabilities across numerous NLP tasks. Nevertheless, conventional first-order fine-tuning techniques impose heavy memory demands, creating practical obstacles to real-world applications. Zeroth-order (ZO) optimization has recently emerged as a promising memory-efficient alternative, as it circumvents the need for backpropagation by estimating gradients solely through forward passes, making it particularly suitable for resource-limited environments. Despite its efficiency, ZO optimization suffers from gradient estimation bias, which significantly hinders convergence speed. To address this, we analytically identify and characterize the lower-order bias introduced during ZO-based gradient estimation in LLM fine-tuning. Motivated by tools in mathematical physics, we introduce a kernel-function-based ZO framework aimed at mitigating this bias and improving optimization stability. KerZOO achieves comparable or superior performance to existing ZO baselines in both full-parameter and parameter-efficient fine-tuning settings of LLMs, while significantly reducing the number of iterations required to reach convergence. For example, KerZOO reduces total GPU training hours by as much as 74% and 44% on the WSC and MultiRC datasets when fine-tuning the OPT-2.7B model, and can exceed the MeZO baseline by 2.9% and 2.6% in accuracy. We show that the kernel function is an effective avenue for reducing estimation bias in ZO methods.
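The forward-pass-only gradient estimation the abstract describes can be sketched as a classical two-point (SPSA-style) ZO estimator. This is a minimal illustration of the general ZO idea that KerZOO builds on, not the paper's kernel-informed method; the toy quadratic loss, step sizes, and function names below are assumptions for demonstration only:

```python
import numpy as np

def zo_gradient(loss_fn, theta, eps=1e-3, seed=0):
    # Two-point zeroth-order gradient estimate: perturb all parameters
    # along one shared random direction z and estimate
    #   g ≈ (L(theta + eps*z) - L(theta - eps*z)) / (2*eps) * z,
    # using only two forward passes and no backpropagation.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(theta.shape)
    loss_plus = loss_fn(theta + eps * z)
    loss_minus = loss_fn(theta - eps * z)
    return (loss_plus - loss_minus) / (2 * eps) * z

# Toy quadratic loss with minimum at theta = [1, -2] (illustrative stand-in
# for an LLM fine-tuning objective).
loss = lambda t: np.sum((t - np.array([1.0, -2.0])) ** 2)

theta = np.zeros(2)
for step in range(500):
    # Re-seed each step so a fresh perturbation direction is drawn.
    theta -= 0.05 * zo_gradient(loss, theta, seed=step)
```

After the loop, `theta` approaches the minimizer `[1, -2]`, illustrating that forward-pass-only estimates suffice for optimization; the bias of this finite-difference estimate on non-quadratic losses is exactly what the paper's kernel framework targets.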
Problem

Research questions and friction points this paper is trying to address.

Reducing memory demands in LLM fine-tuning
Mitigating gradient estimation bias in ZO optimization
Improving convergence speed and accuracy in fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kernel-function-based ZO framework that suppresses gradient estimation bias
Theoretical characterization of the low-order bias in ZO gradient estimators
Significantly faster convergence with fewer iterations