Zeroth-Order Methods for Stochastic Nonconvex Nonsmooth Composite Optimization

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies stochastic nonconvex nonsmooth composite optimization without smoothness assumptions—such as Lipschitz continuity of gradients—arising in applications like regularized ReLU neural networks and sparse support matrix machines. To address the lack of theoretical guarantees for existing zeroth-order algorithms, we propose two novel definitions of approximate stationarity and establish, for the first time, finite-time convergence rates for zeroth-order stochastic methods under purely nonsmooth nonconvex settings. Methodologically, our approach integrates stochastic difference estimation with composite-structure decoupling, requiring no gradient information whatsoever. Numerical experiments demonstrate the efficacy and robustness of the proposed algorithms on real-world machine learning tasks. This work provides both new theoretical foundations and practical tools for black-box optimization of nondifferentiable deep models.

📝 Abstract
This work aims to solve a stochastic nonconvex nonsmooth composite optimization problem. Previous work on composite optimization requires the major part of the objective to satisfy Lipschitz smoothness or some relaxed smoothness condition, which excludes machine learning examples such as regularized ReLU networks and sparse support matrix machines. In this work, we focus on the stochastic nonconvex composite optimization problem without any smoothness assumptions. In particular, we propose two new notions of approximate stationary points for such problems and obtain finite-time convergence results for two zeroth-order algorithms, one for each notion of approximate stationarity. Finally, we demonstrate the effectiveness of these algorithms through numerical experiments.
Problem

Research questions and friction points this paper is trying to address.

Solving stochastic nonconvex nonsmooth composite optimization problems
Removing smoothness assumptions for broader machine learning applications
Developing zeroth-order algorithms with finite-time convergence guarantees
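The paper's exact algorithms are not reproduced here, but the core zeroth-order idea it builds on can be sketched: estimate a gradient from function values alone via two-point randomized differences, which targets the gradient of a Gaussian smoothing of the objective and therefore remains well defined even when the objective is nonsmooth. The smoothing radius and sample count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_samples=200, rng=None):
    """Two-point randomized-difference gradient estimate of f at x.

    Approximates the gradient of the Gaussian smoothing
    f_mu(x) = E[f(x + mu * u)], which is differentiable even when f is not.
    Uses only function evaluations, never gradients of f.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        # Finite difference along a random direction, scaled back onto u.
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_samples

# Example: f(x) = |x_1| + x_2^2 is nonsmooth at x_1 = 0.
f = lambda x: abs(x[0]) + x[1] ** 2
x = np.array([1.0, 2.0])
g = zo_gradient(f, x, rng=0)  # close to the true gradient (1, 4) away from the kink
```

Away from kinks this estimator concentrates around the true gradient; near a kink it instead tracks the smoothed surrogate, which is the mechanism that lets zeroth-order methods operate without any smoothness assumption on the original objective.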
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zeroth-order methods for nonsmooth composite optimization
Two new notions of approximate stationary points
Finite-time convergence without smoothness assumptions
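To illustrate what "composite-structure decoupling" can look like in practice, here is a minimal sketch of one zeroth-order proximal step for an objective of the form f(x) + λ‖x‖₁: the smooth-free part f is handled by the randomized-difference estimator from function values, while the nonsmooth regularizer is handled exactly through its proximal operator. The step size, regularizer, and smoothing parameters are hypothetical choices for illustration, not the paper's method.

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def zo_prox_step(f, x, step=0.1, lam=0.05, mu=1e-4, num_samples=200, rng=None):
    """One zeroth-order proximal step for min_x f(x) + lam * ||x||_1.

    Estimates a gradient of f from function values only (two-point
    randomized differences), then applies the prox of the l1 term.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    g /= num_samples
    return soft_threshold(x - step * g, step * lam)

# Example: minimize 0.5 * ||x - a||^2 + lam * ||x||_1 from function values only.
a = np.array([1.0, 0.0])
f = lambda z: 0.5 * np.sum((z - a) ** 2)
x = np.array([3.0, -2.0])
for _ in range(50):
    x = zo_prox_step(f, x, rng=0)
```

The design point is the decoupling itself: only the black-box part of the objective is queried through function evaluations, while the structured nonsmooth part never needs to be smoothed because its prox is available in closed form.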