High-Probability Analysis of Online and Federated Zero-Order Optimisation

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates high-probability convergence of zeroth-order (gradient-free) optimization in distributed learning. The authors propose FedZero, a federated zeroth-order algorithm applicable to both federated and single-device settings, which employs a randomized gradient estimator based on uniform sampling over the ℓ₁-sphere. Theoretically, the paper establishes, for the first time, high-probability convergence guarantees (rather than conventional expectation-based bounds) for convex zeroth-order optimization in both single-device and federated settings. A key technical contribution is a novel Lipschitz concentration inequality over the ℓ₁-sphere with explicit constants. The analysis shows that FedZero achieves a near-optimal high-probability error bound for convex federated optimization; moreover, in the single-device setting, it yields the first high-probability O(1/√T) convergence rate, strengthening the classical expectation-based bounds.

📝 Abstract
We study distributed learning in the setting of gradient-free zero-order optimization and introduce FedZero, a federated zero-order algorithm that delivers sharp theoretical guarantees. Specifically, FedZero: (1) achieves near-optimal optimization error bounds with high probability in the federated convex setting; and (2) in the single-worker regime, where the problem reduces to the standard zero-order framework, establishes the first high-probability convergence guarantees for convex zero-order optimization, thereby strengthening the classical expectation-based results. At its core, FedZero employs a gradient estimator based on randomization over the $\ell_1$-sphere. To analyze it, we develop new concentration inequalities for Lipschitz functions under the uniform measure on the $\ell_1$-sphere, with explicit constants. These concentration tools are not only central to our high-probability guarantees but may also be of independent interest.
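
Neither the summary nor the abstract spells out the estimator's exact form. The sketch below shows one standard way to build a two-point zero-order gradient estimator whose random direction is drawn uniformly from the ℓ₁-sphere; the sampling routine (exponential magnitudes with random signs, normalized in ℓ₁) and the rescaling constant are assumptions of this sketch, not necessarily the paper's construction.

```python
import numpy as np

def sample_l1_sphere(d, rng):
    """Draw a direction uniformly from the unit l1-sphere {u : ||u||_1 = 1}.

    Standard construction: i.i.d. exponential magnitudes normalized by their
    l1-norm (a uniform point on the simplex), combined with independent
    random signs.
    """
    magnitudes = rng.exponential(scale=1.0, size=d)
    signs = rng.choice([-1.0, 1.0], size=d)
    return signs * magnitudes / magnitudes.sum()

def zo_gradient_estimate(f, x, delta, rng):
    """Two-point zero-order gradient estimate at x with smoothing radius delta.

    Under the sampling scheme above, E[u u^T] = 2 / (d (d + 1)) * I, so the
    symmetric finite difference is rescaled by d (d + 1) / 2 to be first-order
    unbiased for the gradient; whether the paper uses this exact constant is
    an assumption here.
    """
    d = x.shape[0]
    u = sample_l1_sphere(d, rng)
    scale = d * (d + 1) / 2.0
    return scale * (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

if __name__ == "__main__":
    # Sanity check on a quadratic whose true gradient at x is x itself.
    rng = np.random.default_rng(0)
    f = lambda x: 0.5 * np.dot(x, x)
    x = np.array([1.0, -2.0, 0.5])
    estimates = [zo_gradient_estimate(f, x, 1e-3, rng) for _ in range(20000)]
    print(np.mean(estimates, axis=0))  # averages toward [1.0, -2.0, 0.5]
```

Only function values of f are used, which is what makes such an estimator applicable in the gradient-free setting the paper studies.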
Problem

Research questions and friction points this paper is trying to address.

Develops FedZero for federated zero-order optimization with theoretical guarantees
Establishes high-probability convergence for convex zero-order single-worker optimization
Introduces new concentration inequalities for Lipschitz functions on the ℓ₁-sphere
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated zero-order algorithm with sharp guarantees (a schematic round is sketched after this list)
Gradient estimator using randomization on the ℓ₁-sphere
New concentration inequalities for Lipschitz functions
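
FedZero's update rule is not described on this page, so the following is only a generic sketch of one communication round of a federated zero-order method: each worker runs a few local zero-order descent steps and the server averages the resulting iterates, as in standard federated averaging. It assumes zo_gradient_estimate from the earlier sketch is in scope; the names, step sizes, and averaging rule are illustrative assumptions, not FedZero's specification.

```python
import numpy as np

def local_zero_order_steps(f_m, x, num_steps, step_size, delta, rng):
    """Run a few plain zero-order descent steps on one worker's local objective f_m."""
    for _ in range(num_steps):
        g_hat = zo_gradient_estimate(f_m, x, delta, rng)  # estimator from the earlier sketch
        x = x - step_size * g_hat
    return x

def federated_zero_order_round(local_objectives, x_global, num_steps, step_size, delta, rng):
    """One FedAvg-style round: broadcast the global model, update locally, average."""
    local_iterates = [
        local_zero_order_steps(f_m, x_global.copy(), num_steps, step_size, delta, rng)
        for f_m in local_objectives
    ]
    return np.mean(local_iterates, axis=0)

if __name__ == "__main__":
    # Toy example: three workers, each holding a quadratic centered at a different point.
    rng = np.random.default_rng(1)
    centers = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]
    objectives = [lambda x, c=c: 0.5 * np.dot(x - c, x - c) for c in centers]
    x = np.zeros(2)
    for _ in range(50):
        x = federated_zero_order_round(objectives, x, num_steps=5,
                                       step_size=0.05, delta=1e-3, rng=rng)
    print(x)  # drifts toward the average of the centers, roughly [0.0, 1.0]
```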