Model Evolution Under Zeroth-Order Optimization: A Neural Tangent Kernel Perspective

📅 2026-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of a clear theoretical characterization of zeroth-order optimization dynamics, a counterpart to the neural tangent kernel (NTK) framework for first-order methods, which has so far been hindered by the stochasticity of gradient estimation. To bridge this gap, we introduce the Neural Zeroth-order Kernel (NZK), which describes the evolution of models in function space under zeroth-order updates, and establish its connection to kernel gradient descent. We prove that, for linear models with squared loss, the expected NZK remains constant throughout training, enabling a closed-form solution for model evolution; this result extends to linearized neural networks. Experiments on synthetic data and on benchmarks including MNIST, CIFAR-10, and Tiny ImageNet validate our theory and demonstrate that using a single shared random vector significantly accelerates convergence.
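For context on the first-order baseline the summary refers to: standard NTK theory describes gradient-flow training with squared loss as kernel gradient descent in function space, with a closed form when the kernel is constant. The LaTeX sketch below restates those well-known equations only; the paper's NZK is the zeroth-order analogue, and its exact definition is not reproduced here.

```latex
% Standard (first-order) NTK dynamics under gradient flow with squared loss.
% The NZK introduced in the paper plays the analogous role for zeroth-order updates.
\begin{align}
  \Theta_t(x, x') &= \nabla_\theta f_t(x)^{\top} \nabla_\theta f_t(x')
    && \text{(empirical NTK)} \\
  \frac{\mathrm{d} f_t(x)}{\mathrm{d} t}
    &= -\sum_{i=1}^{n} \Theta_t(x, x_i)\,\bigl(f_t(x_i) - y_i\bigr)
    && \text{(kernel gradient descent)} \\
  f_t &= y + e^{-\Theta t}\,\bigl(f_0 - y\bigr)
    && \text{(closed form on the training set when } \Theta_t \equiv \Theta\text{)}
\end{align}
```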

📝 Abstract
Zeroth-order (ZO) optimization enables memory-efficient training of neural networks by estimating gradients via forward passes only, eliminating the need for backpropagation. However, the stochastic nature of gradient estimation significantly obscures the training dynamics, in contrast to the well-characterized behavior of first-order methods under Neural Tangent Kernel (NTK) theory. To address this, we introduce the Neural Zeroth-order Kernel (NZK) to describe model evolution in function space under ZO updates. For linear models, we prove that the expected NZK remains constant throughout training and depends explicitly on the first and second moments of the random perturbation directions. This invariance yields a closed-form expression for model evolution under squared loss. We further extend the analysis to linearized neural networks. Interpreting ZO updates as kernel gradient descent via NZK provides a novel perspective for potentially accelerating convergence. Extensive experiments across synthetic and real-world datasets (including MNIST, CIFAR-10, and Tiny ImageNet) validate our theoretical results and demonstrate acceleration when using a single shared random vector.
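To make the forward-pass-only training referred to in the abstract concrete, here is a minimal Python sketch of a generic two-point zeroth-order gradient estimate with an SGD-style update, applied to a toy linear least-squares problem (the setting in which the paper proves the expected NZK is constant). The estimator, perturbation distribution, and the exact shared-random-vector scheme used in the paper may differ; the `shared_u` argument is only an illustrative stand-in for reusing one direction across steps.

```python
import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-3, rng=None, u=None):
    """Two-point zeroth-order gradient estimate along one random direction.

    Only forward evaluations of `loss_fn` are used (no backpropagation).
    Passing `u` reuses a fixed random direction, loosely mirroring the
    single-shared-random-vector setting discussed in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    if u is None:
        u = rng.standard_normal(theta.shape)            # random perturbation direction
    directional = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2.0 * mu)
    return directional * u                               # gradient estimate

def zo_sgd(loss_fn, theta, lr=1e-2, steps=1000, shared_u=None):
    """Plain ZO-SGD loop: theta <- theta - lr * g_hat at every step."""
    for _ in range(steps):
        theta = theta - lr * zo_gradient(loss_fn, theta, u=shared_u)
    return theta

if __name__ == "__main__":
    # Toy usage: linear model with squared loss.
    rng = np.random.default_rng(0)
    X, y = rng.standard_normal((64, 8)), rng.standard_normal(64)
    loss = lambda w: 0.5 * np.mean((X @ w - y) ** 2)
    w0 = np.zeros(8)
    w_final = zo_sgd(loss, w0, steps=2000)
    print(loss(w0), "->", loss(w_final))
```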
Problem

Research questions and friction points this paper is trying to address.

Zeroth-order optimization
Neural Tangent Kernel
Model evolution
Gradient estimation
Training dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zeroth-order optimization
Neural Tangent Kernel
Neural Zeroth-order Kernel
Kernel gradient descent
Model evolution
Chen Zhang
The University of Hong Kong
Statistical machine learning · Nonparametric methods
Yuxin Cheng
The University of Hong Kong
Chenchen Ding
The University of Hong Kong
Shuqi Wang
The University of Hong Kong
Jingreng Lei
The University of Hong Kong
AI for Wireless Communications · Optimization · Distributed Algorithms · Robotics Perception
Runsheng Yu
Unknown affiliation
Yik-Chung Wu
The University of Hong Kong
Ngai Wong
The University of Hong Kong