🤖 AI Summary
This work addresses the inherent trade-off between estimation accuracy and query efficiency in zeroth-order optimization for large-scale model fine-tuning. The authors propose ZO-Muon, a method that integrates subspace projection with gradient orthogonalization. Projecting gradients onto a low-rank subspace reduces the variance of the gradient estimates, while a Muon-inspired spectral optimization step orthogonalizes the zeroth-order gradients, yielding a unified framework for subspace-based gradient orthogonalization. Empirical results show substantial gains: on SST-2, ZO-Muon matches MeZO's accuracy using only 24.7% of its query budget, and on CIFAR-100 it improves ViT-B accuracy by 25.1%, jointly improving accuracy and query efficiency.
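For intuition on the Muon-inspired orthogonalization step, the sketch below (our own illustration, not the authors' code) projects a noisy gradient matrix onto a random low-rank subspace and then replaces it with the semi-orthogonal factor of its SVD; the actual Muon optimizer approximates this map with a Newton-Schulz iteration. The dimensions, rank, and random projection are assumptions chosen only for the example.

```python
import numpy as np

def orthogonalize(G):
    """Return the semi-orthogonal polar factor U @ Vt of G: keep the spectral
    directions, flatten the singular values. Muon approximates this map with a
    Newton-Schulz iteration instead of an explicit SVD."""
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ Vt

# Toy usage (shapes and rank are illustrative assumptions):
rng = np.random.default_rng(0)
grad = rng.standard_normal((768, 768))            # stand-in for a noisy ZO gradient
P = rng.standard_normal((768, 32)) / np.sqrt(32)  # random rank-32 subspace projection
grad_sub = grad @ P                               # project: 768 x 32
update_dir = orthogonalize(grad_sub) @ P.T        # orthogonalize, then map back
```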
📝 Abstract
Zeroth-order (ZO) optimization provides a gradient-free alternative to first-order (FO) methods by estimating gradients via finite differences of function evaluations, and has recently emerged as a memory-efficient paradigm for fine-tuning large-scale models by avoiding backpropagation. However, ZO optimization faces a fundamental tension between estimation accuracy and query efficiency. In this work, we show that ZO optimization can be substantially improved by unifying two complementary principles: (i) a projection-based subspace view that reduces gradient estimation variance by exploiting the intrinsic low-rank structure of model updates, and (ii) Muon-style spectral optimization that applies gradient orthogonalization to extract informative spectral structure from noisy ZO gradients. These principles form a unified framework of subspace gradient orthogonalization, which we instantiate in a new method, ZO-Muon, that admits a natural interpretation as a low-rank Muon optimizer in the ZO setting. Extensive experiments on large language models (LLMs) and vision transformers (ViTs) demonstrate that ZO-Muon significantly accelerates convergence and achieves a win-win improvement in accuracy and query/runtime efficiency. Notably, compared to the popular MeZO baseline, ZO-Muon requires only 24.7% of the queries to reach the same SST-2 performance for LLM fine-tuning, and improves accuracy by 25.1% on ViT-B fine-tuning on CIFAR-100.
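As a concrete reading of the abstract, the following minimal sketch combines a MeZO/SPSA-style two-point finite-difference gradient estimate with subspace projection and SVD-based orthogonalization into one ZO-Muon-style update. It is a hypothetical illustration under our own assumptions (the projection `P`, the function names, and all hyperparameters are not from the paper), not the authors' implementation.

```python
import numpy as np

def two_point_zo_grad(loss_fn, W, eps=1e-3, rng=None):
    """MeZO/SPSA-style estimate: one Gaussian direction z, weighted by the
    finite difference (L(W + eps*z) - L(W - eps*z)) / (2*eps)."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(W.shape)
    coeff = (loss_fn(W + eps * z) - loss_fn(W - eps * z)) / (2 * eps)
    return coeff * z

def zo_muon_style_step(loss_fn, W, P, lr=1e-2, eps=1e-3, rng=None):
    """One illustrative update on a weight matrix W (d_out x d_in); P (d_in x r)
    spans the low-rank subspace. Both the projection and the step are assumptions."""
    g = two_point_zo_grad(loss_fn, W, eps=eps, rng=rng)   # noisy ZO gradient
    g_sub = g @ P                                         # project onto the subspace
    U, _, Vt = np.linalg.svd(g_sub, full_matrices=False)  # Muon-style orthogonalization
    return W - lr * (U @ Vt) @ P.T                        # map back and take the step

# Toy usage on a least-squares objective (purely illustrative):
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
X, Y = rng.standard_normal((128, 64)), rng.standard_normal((128, 64))
loss = lambda M: float(np.mean((X @ M.T - Y) ** 2))
P = rng.standard_normal((64, 8)) / np.sqrt(8)
for _ in range(5):
    W = zo_muon_style_step(loss, W, P, rng=rng)
```

In this sketch each update needs only two forward evaluations of the loss and no backpropagation, which is the source of the memory- and query-efficiency argument the abstract makes.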