🤖 AI Summary
This work proposes Hierarchical Zeroth-Order Optimization (HZO), a novel approach that overcomes the poor scalability of conventional zeroth-order methods, whose query complexity scales as $O(ML^2)$, when applied to deep neural networks. By introducing a divide-and-conquer strategy along the network depth, HZO departs from the standard layer-wise gradient propagation paradigm and reduces the query complexity to $O(ML \log L)$. The method integrates hierarchical decomposition, rigorous error analysis, and Lipschitz constant control to ensure numerical stability, particularly in the near-unitary regime. Empirical evaluations on CIFAR-10 and ImageNet demonstrate that HZO achieves accuracy comparable to backpropagation, substantially enhancing the scalability and practicality of zeroth-order optimization for deep models.
📝 Abstract
Zeroth-order (ZO) optimization has long been favored for its biological plausibility and its capacity to handle non-differentiable objectives, yet its computational complexity has historically limited its application in deep neural networks. Challenging the conventional paradigm that gradients propagate layer-by-layer, we propose Hierarchical Zeroth-Order (HZO) optimization, a novel divide-and-conquer strategy that decomposes the depth dimension of the network. We prove that HZO reduces the query complexity from $O(ML^2)$ to $O(ML \log L)$ for a network of width $M$ and depth $L$, representing a significant leap over existing ZO methodologies. Furthermore, we provide a detailed error analysis showing that HZO maintains numerical stability by operating near the unitary limit ($L_{lip} \approx 1$). Extensive evaluations on CIFAR-10 and ImageNet demonstrate that HZO achieves competitive accuracy compared to backpropagation.
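To make the complexity claim concrete, the following is a minimal sketch of how the $O(ML \log L)$ query count arises from a recursive split along the depth dimension. The recurrence and the function `hzo_query_count` are illustrative assumptions, not the authors' implementation: it assumes each recursion level probes the full depth with $M$ perturbation queries per layer, which yields $T(L) = 2\,T(L/2) + ML$ and hence $T(L) = ML(\log_2 L + 1)$ for power-of-two depths.

```python
def hzo_query_count(depth: int, width: int) -> int:
    """Count ZO queries under a hypothetical divide-and-conquer scheme
    that splits the network depth in half at each recursion level.

    Base case: a single layer costs `width` (M) perturbation queries.
    Recursive case: solve both halves, plus M queries per layer at
    this level, giving the recurrence T(L) = 2 T(L/2) + M*L.
    """
    if depth == 1:
        return width
    half = depth // 2
    return (hzo_query_count(half, width)
            + hzo_query_count(depth - half, width)
            + width * depth)

# For M = 4, L = 8: T(8) = M * L * (log2(L) + 1) = 4 * 8 * 4 = 128,
# i.e. O(M L log L), versus O(M L^2) = 256 for the layer-wise baseline.
print(hzo_query_count(8, 4))  # 128
```

The point of the sketch is only the recurrence: because the depth is halved at each level, there are $\log_2 L$ levels of recursion rather than $L$ sequential layer-wise passes, which is where the improvement over the $O(ML^2)$ baseline comes from.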