CurvZO: Adaptive Curvature-Guided Sparse Zeroth-Order Optimization for Efficient LLM Fine-Tuning

📅 2026-03-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high memory overhead of backpropagation in large language model fine-tuning and the slow, unstable convergence of existing zeroth-order optimization methods due to high-variance gradient estimates. To overcome these limitations, the authors propose an adaptive curvature-guided sparse zeroth-order optimization method that, for the first time, incorporates online curvature signals into zeroth-order optimization. This approach dynamically constructs a parameter sampling distribution to guide sparse coordinate updates and adaptively adjusts the perturbation budget, substantially reducing estimation variance. Experiments on OPT and Llama models demonstrate that the proposed method achieves up to a 4.4-percentage-point improvement in accuracy, accelerates training by up to 2×, and maintains low memory consumption.
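The high-variance gradient estimates mentioned above come from the classic two-point (SPSA-style) zeroth-order estimator that methods like MeZO build on, which recovers a gradient direction from two forward passes and a scalar loss difference. A minimal sketch of that baseline estimator (not the paper's method; `two_point_zo_grad` and the toy quadratic are illustrative assumptions):

```python
import numpy as np

def two_point_zo_grad(loss_fn, theta, eps=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate from scalar loss feedback.

    Perturbs all coordinates along a random Gaussian direction z and
    projects the finite-difference directional derivative back onto z.
    This is the high-variance baseline that sparse/guided ZO methods
    aim to improve on.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(theta.shape)
    # Directional derivative estimate from two forward passes only.
    g = (loss_fn(theta + eps * z) - loss_fn(theta - eps * z)) / (2 * eps)
    return g * z  # unbiased in expectation; variance grows with dimension

# Toy quadratic whose true gradient at theta is 2 * theta.
theta = np.array([1.0, -2.0, 0.5])
grad_est = two_point_zo_grad(lambda t: float(np.sum(t**2)), theta)
```

A single estimate is noisy; only its average over many random directions approaches the true gradient, which is exactly the variance problem the paper targets.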

📝 Abstract
Fine-tuning large language models (LLMs) with backpropagation achieves high performance but incurs substantial memory overhead, limiting scalability on resource-constrained hardware. Zeroth-order (ZO) optimization provides a memory-efficient alternative by relying solely on forward passes, yet it typically suffers from slow or unstable convergence due to high-variance gradient estimates. Sparse ZO updates partially address this issue by perturbing only a subset of parameters, but their effectiveness hinges on selecting informative parameters, which is challenging in ZO optimization because each query yields only scalar feedback. We propose **Adaptive Curvature-Guided Sparse Zeroth-Order Optimization (CurvZO)**, which tracks curvature signals online from scalar ZO feedback and leverages these signals to construct a parameter-wise sampling distribution for selecting coordinates at each update, reducing the variance of the sparse ZO gradient estimator. Moreover, CurvZO dynamically adapts the perturbation budget to the evolving curvature signal distribution, yielding sparse ZO updates that remain both focused and sufficiently exploratory. Extensive experiments on OPT and Llama across diverse NLP tasks show that CurvZO consistently improves fine-tuning performance and reduces training time over ZO baselines. It improves accuracy by up to 4.4 points and achieves up to a 2× speedup, while preserving memory efficiency.
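The abstract's core idea — sample coordinates from a curvature-derived distribution, perturb only those, and refresh the curvature proxy from the same scalar feedback — can be sketched as follows. This is a minimal illustration under assumptions, not the authors' algorithm: the function name, the Rademacher perturbations, the EMA curvature proxy, and the toy loss are all invented for the sketch, and the second-difference probe here is shared crudely across the sampled block.

```python
import numpy as np

def curvature_guided_sparse_zo_step(loss_fn, theta, curv, k=2, eps=1e-3,
                                    lr=0.02, beta=0.9, rng=None):
    """One sparse ZO step guided by per-coordinate curvature scores.

    Sketch only: coordinates are drawn in proportion to a running
    curvature proxy `curv`; only those k coordinates are perturbed.
    Three forward passes yield both a directional gradient estimate
    and a central second difference that refreshes the proxy.
    """
    rng = rng or np.random.default_rng(0)
    d = theta.size
    p = curv / curv.sum()                       # curvature-derived sampling distribution
    idx = rng.choice(d, size=k, replace=False, p=p)
    z = np.zeros(d)
    z[idx] = rng.choice([-1.0, 1.0], size=k)    # Rademacher perturbation on sampled coords
    f_plus = loss_fn(theta + eps * z)
    f_zero = loss_fn(theta)
    f_minus = loss_fn(theta - eps * z)
    g = (f_plus - f_minus) / (2 * eps)          # directional derivative estimate
    # Central second difference: a crude curvature proxy, shared by the
    # sampled block; folded into an exponential moving average.
    h = abs(f_plus - 2 * f_zero + f_minus) / eps**2
    curv = curv.copy()
    curv[idx] = beta * curv[idx] + (1 - beta) * (h + 1e-8)
    return theta - lr * g * z, curv             # update touches only sampled coords

# Toy ill-conditioned quadratic: coordinate 0 is much stiffer than the rest.
loss = lambda t: float(10 * t[0]**2 + 0.1 * t[1]**2 + 0.1 * t[2]**2)
theta = np.array([1.0, 1.0, 1.0])
curv = np.ones(3)                               # uniform sampling until proxies adapt
for s in range(300):
    theta, curv = curvature_guided_sparse_zo_step(
        loss, theta, curv, rng=np.random.default_rng(s))
```

On this toy problem the proxy quickly concentrates sampling mass on the stiff coordinate; CurvZO's adaptive perturbation budget (not modeled here) would additionally adjust `eps` and the sparsity level as the curvature distribution evolves.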
Problem

Research questions and friction points this paper is trying to address.

Zeroth-Order Optimization
Large Language Models
Sparse Updates
Gradient Estimation Variance
Parameter Selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zeroth-Order Optimization
Curvature-Guided Sampling
Sparse Updates
Adaptive Perturbation
Memory-Efficient Fine-Tuning
Shuo Wang
Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China.
Ziyu Chen
Chongqing University
Ming Tang
Southern University of Science and Technology