ConMeZO: Adaptive Descent-Direction Sampling for Gradient-Free Finetuning of Large Language Models

📅 2025-11-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address slow convergence in zeroth-order fine-tuning of large language models (LLMs) caused by high-dimensional random parameter-space search, this paper proposes ConMeZO, a conical momentum-guided adaptive directional sampling method. ConMeZO constrains zeroth-order random direction sampling to a cone centered around a momentum-estimated direction, thereby focusing exploration on directions more likely aligned with the true gradient and mitigating the curse of dimensionality. The method requires only two forward passes per iteration and no backward propagation, with memory overhead comparable to MeZO. The authors provide theoretical analysis showing that ConMeZO achieves the same worst-case convergence rate as MeZO. Empirical evaluation on natural language understanding and generation tasks demonstrates up to a 2× improvement in convergence speed over MeZO and superior performance relative to existing zeroth-order optimizers.
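The two-forward-pass update described above can be sketched as a standard two-point zeroth-order estimate. This is a minimal illustration of the general technique, not the paper's implementation; `loss_fn`, `eps`, and `lr` are placeholder names and values.

```python
import numpy as np

def zo_gradient_step(params, loss_fn, direction, eps=1e-3, lr=1e-1):
    """One zeroth-order update: two forward passes, no backpropagation.

    Illustrative sketch; hyperparameters are placeholders, not the
    paper's settings.
    """
    # Central finite difference of the loss along the sampled direction.
    loss_plus = loss_fn(params + eps * direction)
    loss_minus = loss_fn(params - eps * direction)
    grad_est = (loss_plus - loss_minus) / (2 * eps)
    # Step against the estimated directional derivative.
    return params - lr * grad_est * direction

# Toy quadratic objective to show the update reduces the loss.
loss = lambda p: float(np.sum(p ** 2))
rng = np.random.default_rng(0)
p = rng.standard_normal(10)
u = rng.standard_normal(10)
u /= np.linalg.norm(u)
p_new = zo_gradient_step(p, loss, u)
```

With the loss computed only at `params ± eps * direction`, the per-step memory footprint stays at inference level, which is the appeal of MeZO-style methods.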

📝 Abstract
Zeroth-order or derivative-free optimization (MeZO) is an attractive strategy for finetuning large language models (LLMs) because it eliminates the memory overhead of backpropagation. However, it converges slowly due to the inherent curse of dimensionality when searching for descent directions in the high-dimensional parameter space of billion-scale LLMs. We propose ConMeZO, a novel zeroth-order optimizer that accelerates convergence by adaptive directional sampling. Instead of drawing the direction uniformly at random, ConMeZO restricts the sampling to a cone centered around a momentum estimate. This concentrates the search in directions where the true gradient is more likely to lie and thus reduces the effect of high dimensions. We prove that ConMeZO achieves the same worst-case convergence rate as MeZO. Empirically, when finetuning LLMs on natural language tasks, ConMeZO is up to 2X faster than MeZO while retaining the low-memory footprint of zeroth-order methods.
Problem

Research questions and friction points this paper is trying to address.

Accelerates gradient-free finetuning of large language models
Reduces slow convergence in high-dimensional parameter spaces
Maintains low memory usage while improving optimization speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive directional sampling for zeroth-order optimization
Cone-based sampling around momentum estimate
Faster convergence with low-memory footprint
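The cone-based sampling idea can be illustrated with a small sketch: instead of drawing a uniformly random direction, combine the normalized momentum axis with an orthogonal random component at a bounded angle. The uniform angle schedule and exact construction here are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def sample_cone_direction(momentum, half_angle, rng):
    """Sample a unit direction inside a cone of the given half-angle
    (radians) around the normalized momentum vector.

    Hypothetical sketch of cone-restricted sampling; ConMeZO's actual
    construction and angle choice may differ.
    """
    m = momentum / np.linalg.norm(momentum)
    # Random Gaussian vector, projected to be orthogonal to the cone axis.
    v = rng.standard_normal(m.shape)
    v -= v.dot(m) * m
    v /= np.linalg.norm(v)
    # Mix axis and orthogonal component at a sampled angle (assumption:
    # uniform in [0, half_angle]).
    theta = half_angle * rng.uniform()
    return np.cos(theta) * m + np.sin(theta) * v

rng = np.random.default_rng(0)
m = rng.standard_normal(1000)
u = sample_cone_direction(m, half_angle=0.3, rng=rng)
```

Concentrating samples near the momentum axis raises the expected alignment between the sampled direction and the true gradient, which is how the method counteracts the dimension-dependent slowdown of uniform sampling.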
Lejs Deen Behric
Department of Computer Science, ETH Zurich
Liang Zhang
Department of Computer Science, ETH Zurich
Bingcong Li
ETH Zurich
K. Thekumparampil
Amazon AGI Labs

optimization, LLMs, fine-tuning