Explicit and Non-asymptotic Query Complexities of Rank-Based Zeroth-order Algorithms on Smooth Functions

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the long-standing lack of non-asymptotic theoretical guarantees for ranking-based zeroth-order (ZO) optimization methods—such as CMA-ES and natural evolutionary strategies. We establish, for the first time, explicit non-asymptotic query complexity bounds for top-$k$ direction selection algorithms. Departing from conventional drift analysis and information-geometric approaches, we introduce a novel analytical framework that uncovers the fundamental mechanisms enabling efficient optimization under ranking feedback. For $\mu$-strongly convex $L$-smooth functions, our bound is $\widetilde{O}\big((dL/\mu)\log(1/\varepsilon)\big)$; for nonconvex $L$-smooth functions, it is $O\big((dL/\varepsilon)\log(1/\varepsilon)\big)$, holding with probability at least $1-\delta$. These results provide the first rigorous quantification of the robustness–efficiency trade-off induced by ranking feedback, thereby filling a critical theoretical gap in ranking-based ZO optimization.

📝 Abstract
Rank-based zeroth-order (ZO) optimization -- which relies only on the ordering of function evaluations -- offers strong robustness to noise and monotone transformations, and underlies many successful algorithms such as CMA-ES, natural evolution strategies, and rank-based genetic algorithms. Despite its widespread use, the theoretical understanding of rank-based ZO methods remains limited: existing analyses provide only asymptotic insights and do not yield explicit convergence rates for algorithms selecting the top-$k$ directions. This work closes this gap by analyzing a simple rank-based ZO algorithm and establishing the first *explicit* and *non-asymptotic* query complexities. For a $d$-dimensional problem, if the function is $L$-smooth and $\mu$-strongly convex, the algorithm achieves a query complexity of $\widetilde{\mathcal O}\left(\frac{dL}{\mu}\log\frac{dL}{\mu\delta}\log\frac{1}{\varepsilon}\right)$ to find an $\varepsilon$-suboptimal solution, and for smooth nonconvex objectives it reaches $\mathcal O\left(\frac{dL}{\varepsilon}\log\frac{1}{\varepsilon}\right)$. The notation $\mathcal O(\cdot)$ hides constant factors, and $\widetilde{\mathcal O}(\cdot)$ additionally hides a $\log\log\frac{1}{\varepsilon}$ term. These query complexities hold with probability at least $1-\delta$ for $0<\delta<1$. The analysis in this paper is novel and avoids classical drift and information-geometric techniques. Our analysis offers new insight into why rank-based heuristics lead to efficient ZO optimization.
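To make the setting concrete, here is a minimal sketch of one step of a generic top-$k$ rank-based ZO update: sample random unit directions, rank the perturbed function evaluations (only their ordering is used, which is what gives invariance to monotone transformations of $f$), and move along the average of the $k$ best directions. All names and parameter values below are hypothetical illustration, not the paper's exact algorithm or constants.

```python
import numpy as np

def rank_based_zo_step(f, x, k=4, num_dirs=16, sigma=0.1, eta=0.1, rng=None):
    """One illustrative top-k rank-based ZO update (hypothetical sketch).

    Only the ordering of the evaluations f(x + sigma * u_i) is used,
    so the step is invariant to monotone transformations of f.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    dirs = rng.standard_normal((num_dirs, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
    vals = np.array([f(x + sigma * u) for u in dirs])      # queries; only ranks matter
    top = np.argsort(vals)[:k]                             # indices of k best directions
    step = dirs[top].mean(axis=0)                          # average of top-k directions
    return x + eta * step
```

For instance, iterating this step on the sphere function $f(x)=\|x\|^2$ steadily decreases the objective, since the top-ranked directions correlate with the negative gradient.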
Problem

Research questions and friction points this paper is trying to address.

Establishes explicit query complexities for rank-based zeroth-order optimization
Provides non-asymptotic convergence rates for smooth convex and nonconvex functions
Analyzes a simple algorithm to explain the efficiency of rank-based heuristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rank-based zeroth-order algorithm using ordering of function evaluations
Explicit non-asymptotic query complexity analysis for smooth functions
Novel analysis avoiding classical drift and information-geometric techniques