🤖 AI Summary
This work addresses the absence of a clear theoretical characterization of zeroth-order optimization dynamics, analogous to what the neural tangent kernel (NTK) framework provides for first-order methods; the difficulty stems from the stochasticity of gradient estimation. To bridge this gap, we introduce the Neural Zeroth-order Kernel (NZK), which describes the evolution of models in function space under zeroth-order updates, and we establish its connection to kernel gradient descent. We prove that, for linear models with squared loss, the expected NZK remains constant throughout training, enabling a closed-form solution for model evolution; this result extends to linearized neural networks. Experiments on synthetic data and on benchmarks including MNIST, CIFAR-10, and Tiny ImageNet validate the theory and demonstrate that using a single shared random vector significantly accelerates convergence.
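The closed-form evolution mentioned above mirrors the standard constant-kernel result from NTK theory. As a sketch, writing the (constant) expected NZK on the training inputs $X$ as $\bar K$ and the function values as $f_t(X)$ (notation assumed here, not taken from the source), kernel gradient flow on the squared loss gives

```latex
\frac{d f_t(X)}{dt} = -\eta\,\bar K\,\bigl(f_t(X) - y\bigr)
\quad\Longrightarrow\quad
f_t(X) = y + e^{-\eta \bar K t}\,\bigl(f_0(X) - y\bigr),
```

so when $\bar K$ does not change during training, the trajectory of the model's outputs is determined in closed form by $\bar K$'s spectrum.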
📝 Abstract
Zeroth-order (ZO) optimization enables memory-efficient training of neural networks by estimating gradients via forward passes only, eliminating the need for backpropagation. However, the stochastic nature of gradient estimation significantly obscures the training dynamics, in contrast to the well-characterized behavior of first-order methods under Neural Tangent Kernel (NTK) theory. To address this, we introduce the Neural Zeroth-order Kernel (NZK) to describe model evolution in function space under ZO updates. For linear models, we prove that the expected NZK remains constant throughout training and depends explicitly on the first and second moments of the random perturbation directions. This invariance yields a closed-form expression for model evolution under squared loss. We further extend the analysis to linearized neural networks. Interpreting ZO updates as kernel gradient descent through the NZK offers a new perspective from which convergence can potentially be accelerated. Extensive experiments across synthetic and real-world datasets (including MNIST, CIFAR-10, and Tiny ImageNet) validate our theoretical results and demonstrate acceleration when using a single shared random vector.
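To make the setup concrete, here is a minimal sketch of the kind of forward-pass-only gradient estimator the abstract refers to, applied to a toy linear model with squared loss. The two-point Gaussian estimator, the step size, and the toy data are illustrative assumptions, not details from the paper.

```python
import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate along one random direction.

    Requires only two forward evaluations of loss_fn and no backpropagation,
    which is what makes ZO training memory-efficient. The Gaussian choice of
    the perturbation direction u is an illustrative assumption, not a detail
    taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(theta.shape)
    # Finite difference of the loss along u, projected back onto u.
    directional = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu)
    return directional * u

# Toy linear model with squared loss -- the setting in which the abstract
# states the expected NZK is constant and model evolution has a closed form.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
loss = lambda w: 0.5 * np.mean((X @ w - y) ** 2)

rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(2000):
    w -= 0.1 * zo_gradient(loss, w, rng=rng)
```

In expectation the update follows the true gradient (since $\mathbb{E}[uu^\top] = I$ for standard Gaussian directions), so the loss decreases despite each step using only two forward passes.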