🤖 AI Summary
This work addresses the finite-time convergence of nonsmooth, nonconvex stochastic optimization on Riemannian manifolds, a setting that previously lacked theoretical guarantees. To bridge this gap, we introduce a manifold-adapted Goldstein stationarity measure and propose two algorithms: the first-order RO2NC and its zeroth-order counterpart ZO-RO2NC. Both algorithms provably converge to a $(\delta,\epsilon)$-Goldstein stationary point with sample complexity $O(\epsilon^{-3}\delta^{-1})$, matching the optimal rate in the Euclidean setting. This constitutes the first finite-time convergence guarantee for fully nonsmooth, nonconvex stochastic optimization on Riemannian manifolds. Empirical evaluations on principal component analysis and manifold-constrained sparse regression demonstrate the efficacy and robustness of the methods.
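For context, in the Euclidean setting a point $x$ is called $(\delta,\epsilon)$-Goldstein stationary for $f$ if

$$\min\Bigl\{\|g\| \;:\; g \in \operatorname{conv}\Bigl(\textstyle\bigcup_{y \in B_\delta(x)} \partial f(y)\Bigr)\Bigr\} \le \epsilon,$$

where $\partial f$ denotes the Clarke subdifferential and $B_\delta(x)$ the ball of radius $\delta$ around $x$. The manifold-adapted measure introduced here presumably replaces $B_\delta(x)$ with a geodesic ball and compares subgradients in a common tangent space (e.g., via parallel transport); see the paper for the precise definition.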
📝 Abstract
This work addresses the finite-time analysis of nonsmooth, nonconvex stochastic optimization under Riemannian manifold constraints. We adapt the notion of Goldstein stationarity to the Riemannian setting as a performance metric for nonsmooth optimization on manifolds. We then propose a Riemannian Online to NonConvex (RO2NC) algorithm, for which we establish a sample complexity of $O(\epsilon^{-3}\delta^{-1})$ for finding $(\delta,\epsilon)$-stationary points. This result is the first finite-time guarantee for fully nonsmooth, nonconvex optimization on manifolds and matches the optimal complexity in the Euclidean setting. When gradient information is unavailable, we develop a zeroth-order version of the RO2NC algorithm (ZO-RO2NC), for which we establish the same sample complexity. Numerical results support the theory and demonstrate the practical effectiveness of the algorithms.
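To make the zeroth-order ingredient concrete, below is a minimal sketch of a standard two-point finite-difference estimate of the Riemannian gradient on the unit sphere (the manifold underlying the PCA experiment), using function values only. This is an illustrative textbook estimator, not the paper's ZO-RO2NC; the names `zo_riemannian_grad`, `retract`, and `random_tangent`, as well as the step size and smoothing radius, are our own choices for the sketch.

```python
import numpy as np

def retract(x, v):
    """Projection retraction on the sphere: normalize x + v."""
    y = x + v
    return y / np.linalg.norm(y)

def random_tangent(x, rng):
    """Uniform random unit vector in the tangent space at x."""
    v = rng.standard_normal(x.shape)
    v -= (v @ x) * x              # remove the component normal to the sphere
    return v / np.linalg.norm(v)

def zo_riemannian_grad(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order estimate of the Riemannian gradient:
    a finite difference of f along a retracted random tangent direction,
    scaled by the intrinsic dimension (standard smoothing estimator)."""
    rng = rng or np.random.default_rng()
    u = random_tangent(x, rng)
    fd = (f(retract(x, mu * u)) - f(retract(x, -mu * u))) / (2 * mu)
    return (x.size - 1) * fd * u  # intrinsic dim of S^{d-1} is d - 1

# Usage: leading eigenvector of a symmetric matrix via f(x) = -x' A x.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A + A.T
f = lambda x: -x @ A @ x
x = retract(rng.standard_normal(5), 0.0)
for _ in range(5000):
    x = retract(x, -0.005 * zo_riemannian_grad(f, x, rng=rng))
print(x @ A @ x, "vs", np.linalg.eigvalsh(A)[-1])  # approximately equal
```

The sketch only shows why function evaluations suffice on a manifold: finite differences along retracted directions stand in for gradients. How such estimates are combined with the online-to-nonconvex conversion to reach the $O(\epsilon^{-3}\delta^{-1})$ guarantee is the subject of the paper.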