🤖 AI Summary
This paper addresses decentralized online optimization on Riemannian manifolds of positive curvature, extending beyond the conventional restriction to Hadamard (nonpositively curved) manifolds. The core challenge is the absence of global geodesic convexity of distance functions under positive curvature, which impedes consensus convergence. To overcome this, the authors propose a curvature-aware decentralized consensus protocol, achieving, for the first time, linear convergence of the consensus step on non-Hadamard manifolds. Integrated with Riemannian gradient descent, two-point bandit gradient estimation, and smoothing techniques, the framework develops a subconvexity analysis and attains an $O(\sqrt{T})$ regret bound. Experiments validate both the theoretical guarantees and the computational efficiency on positively curved manifolds. This work provides the first provably convergent decentralized Riemannian learning framework applicable to general positive-curvature settings.
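To make the consensus-plus-gradient structure concrete, below is a minimal NumPy sketch of one round of decentralized Riemannian gradient descent on the unit sphere, a canonical positively curved manifold. The function names are illustrative, and the fixed step sizes `alpha` and `eta` stand in for the curvature-dependent choices the paper analyzes; this is a sketch of the general technique, not the authors' exact protocol.

```python
import numpy as np

def proj_tangent(x, g):
    """Project an ambient vector g onto the tangent space of the unit sphere at x."""
    return g - np.dot(g, x) * x

def exp_map(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x along v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def log_map(x, y):
    """Inverse of exp_map: the tangent vector at x pointing along the geodesic to y."""
    c = np.clip(np.dot(x, y), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(x)
    u = y - c * x
    return theta * u / np.linalg.norm(u)

def consensus_step(points, W, alpha):
    """One gossip round: each agent moves toward the weighted tangent-space
    average of its neighbors (W is a doubly stochastic mixing matrix).
    In the paper's protocol alpha would be set from a curvature bound;
    here it is a plain tunable step size."""
    return [exp_map(x, alpha * sum(W[i, j] * log_map(x, y)
                                   for j, y in enumerate(points)))
            for i, x in enumerate(points)]

def decentralized_rgd_round(points, grads, W, alpha, eta):
    """Consensus step followed by each agent's local Riemannian gradient step."""
    points = consensus_step(points, W, alpha)
    return [exp_map(x, -eta * proj_tangent(x, g))
            for x, g in zip(points, grads)]
```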
📝 Abstract
We study decentralized online Riemannian optimization over manifolds with possibly positive curvature, going beyond the Hadamard manifold setting. Decentralized optimization techniques rely on a consensus step that is well understood in Euclidean spaces because of their linearity. In positively curved Riemannian spaces, however, the main technical challenge is that geodesic distances may not induce a globally convex structure. In this work, we first analyze a curvature-aware Riemannian consensus step that enables linear convergence beyond Hadamard manifolds. Building on this step, we establish an $O(\sqrt{T})$ regret bound for the decentralized online Riemannian gradient descent algorithm. We then investigate the two-point bandit feedback setup, where we employ computationally efficient gradient estimators based on smoothing techniques, and we prove the same $O(\sqrt{T})$ regret bound through a subconvexity analysis of the smoothed objectives.
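For the bandit feedback setup, a minimal sketch of the classical two-point smoothing estimator adapted to a tangent space is shown below, reusing `exp_map` from the sketch above. The sampling scheme and the scaling constant are standard for two-point estimators and are stated here as assumptions, not as the paper's exact construction.

```python
import numpy as np

def sample_tangent_unit(x):
    """Draw a uniform unit vector in the tangent space at x on the unit sphere."""
    g = np.random.randn(*x.shape)
    v = g - np.dot(g, x) * x          # project Gaussian onto the tangent space
    return v / np.linalg.norm(v)

def two_point_grad_estimate(f, x, delta, dim):
    """Two-point bandit estimator of the gradient of a smoothed surrogate of f.
    Only two function values are queried per round; dim is the intrinsic
    manifold dimension (n - 1 for the unit sphere in R^n)."""
    u = sample_tangent_unit(x)
    fp = f(exp_map(x, delta * u))     # query at a point perturbed along +u
    fm = f(exp_map(x, -delta * u))    # query at a point perturbed along -u
    return (dim / (2.0 * delta)) * (fp - fm) * u
```

Feeding these estimates into `decentralized_rgd_round` in place of exact gradients gives the bandit variant of the scheme, with `delta` trading off estimator bias against variance.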