Decentralized Online Riemannian Optimization Beyond Hadamard Manifolds

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses decentralized online optimization on Riemannian manifolds of positive curvature, extending beyond the conventional restriction to Hadamard (nonpositively curved) manifolds. The core challenge is that distance functions lose global geodesic convexity under positive curvature, which impedes consensus convergence. To overcome this, we propose a curvature-aware decentralized consensus protocol that, for the first time, achieves linear convergence of consensus steps on non-Hadamard manifolds. Integrated with Riemannian gradient descent, two-point bandit gradient estimation, and smoothing techniques, our framework establishes a subconvexity analysis and attains an $O(\sqrt{T})$ dynamic regret bound. Experiments validate both the theoretical guarantees and the computational efficiency on positively curved manifolds. This work provides the first provably convergent decentralized Riemannian learning framework applicable to general positive-curvature settings.
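The paper's exact curvature-aware protocol is not reproduced on this page, but the flavor of a Riemannian consensus step can be sketched on the unit sphere, a positively curved, non-Hadamard manifold: each agent averages the log-map directions to its neighbors in its own tangent space and moves along the resulting geodesic. The `sphere_log`/`sphere_exp` maps are the standard sphere formulas; the gossip weights and step size `alpha` are illustrative assumptions, not the paper's tuned protocol.

```python
import numpy as np

def sphere_log(x, y):
    """Log map on the unit sphere: tangent vector at x pointing toward y,
    with length equal to the geodesic distance arccos(<x, y>)."""
    cos_t = np.clip(x @ y, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(x)
    v = y - cos_t * x
    return theta * v / np.linalg.norm(v)

def sphere_exp(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x
    in tangent direction v for arc length ||v||."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * (v / n)

def consensus_step(points, weights, alpha=0.5):
    """One gossip-style consensus step: each agent moves toward the
    weighted tangent-space average of its neighbors' positions."""
    new_points = []
    for i, x in enumerate(points):
        # Weighted average of log-map directions to all neighbors.
        d = sum(weights[i][j] * sphere_log(x, points[j])
                for j in range(len(points)))
        new_points.append(sphere_exp(x, alpha * d))
    return new_points
```

Iterating this step from mutually orthogonal starting points drives the agents toward a common point while keeping every iterate exactly on the sphere, which is the behavior the paper's linear-convergence analysis quantifies.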

📝 Abstract
We study decentralized online Riemannian optimization over manifolds with possibly positive curvature, going beyond the Hadamard manifold setting. Decentralized optimization techniques rely on a consensus step that is well understood in Euclidean spaces because of their linearity. However, a main technical challenge in positively curved Riemannian spaces is that geodesic distances may not induce a globally convex structure. In this work, we first analyze a curvature-aware Riemannian consensus step that enables linear convergence beyond Hadamard manifolds. Building on this step, we establish an $O(\sqrt{T})$ regret bound for the decentralized online Riemannian gradient descent algorithm. We then investigate the two-point bandit feedback setup, where we employ computationally efficient gradient estimators using smoothing techniques, and we demonstrate the same $O(\sqrt{T})$ regret bound through a subconvexity analysis of the smoothed objectives.
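The two-point bandit estimator mentioned in the abstract builds on a classic idea: query the loss at two symmetric perturbations of the decision and form a direction-weighted finite difference, which is an unbiased gradient estimate of a smoothed surrogate of the loss. The sketch below is the standard Euclidean form of this estimator; the paper's Riemannian variant (not shown) would draw the perturbation in the tangent space and evaluate through the exponential map.

```python
import numpy as np

def two_point_gradient(f, x, delta=1e-3, rng=None):
    """Two-point bandit gradient estimator: an unbiased estimate of the
    gradient of the delta-smoothed version of f, using only two function
    evaluations per round."""
    rng = rng or np.random.default_rng()
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)          # uniform direction on the unit sphere
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
```

For a quadratic loss the estimator is exactly unbiased for the true gradient, so averaging many independent draws recovers it; the smoothing radius `delta` trades bias against variance in the general case.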
Problem

Research questions and friction points this paper is trying to address.

Decentralized online optimization on positively curved manifolds
Overcoming non-convexity from geodesic distances in Riemannian spaces
Achieving sublinear regret with bandit feedback and efficient estimators
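The convexity friction point above can be made concrete with a standard fact (an illustration, not a result quoted from the paper): on a Hadamard manifold the squared distance to a fixed point is geodesically convex everywhere, whereas on the unit sphere $\mathbb{S}^d$ (curvature $+1$) this holds only locally,

$$
\kappa \le 0:\quad x \mapsto d^2(x,p) \text{ is geodesically convex on all of } \mathcal{M},
\qquad
\mathbb{S}^d:\quad x \mapsto d^2(x,p) \text{ is geodesically convex only on } B\!\left(p, \tfrac{\pi}{2}\right).
$$

Indeed, along a great circle through $p$, the function $d^2(\cdot,p)$ attains a local maximum at the antipode $-p$, so it cannot be globally convex; this is exactly what breaks the standard Euclidean consensus analysis.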
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curvature-aware Riemannian consensus for non-Hadamard manifolds
Decentralized online Riemannian gradient descent with O(√T) regret
Bandit feedback with efficient gradient estimators via smoothing
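Putting the innovations together, a decentralized online loop alternates a consensus step with a local Riemannian gradient step per agent. The sketch below uses the unit sphere with a cheap projection retraction and plain weighted averaging in place of the paper's exponential-map-based, curvature-aware protocol; the network weights `W` and step size `eta` are illustrative assumptions.

```python
import numpy as np

def retract(x, v):
    """Metric projection retraction onto the unit sphere (a cheap
    stand-in for the exponential map)."""
    y = x + v
    return y / np.linalg.norm(y)

def riem_grad(x, egrad):
    """Project a Euclidean gradient onto the tangent space at x."""
    return egrad - (x @ egrad) * x

def decentralized_ogd(losses, x0, W, eta=0.1):
    """Decentralized online Riemannian gradient descent sketch.
    losses: one list per round of per-agent Euclidean-gradient oracles.
    Each round: (1) consensus via weighted averaging + retraction,
    (2) local Riemannian gradient step per agent."""
    xs = [x.copy() for x in x0]
    n = len(xs)
    for grad_fns in losses:
        # Consensus: move toward the weighted neighbor average, retracted.
        mixed = []
        for i in range(n):
            d = sum(W[i][j] * (xs[j] - xs[i]) for j in range(n))
            mixed.append(retract(xs[i], d))
        # Local Riemannian gradient descent step.
        xs = [retract(x, -eta * riem_grad(x, g(x)))
              for x, g in zip(mixed, grad_fns)]
    return xs
```

With a common linear reward $x \mapsto \langle x, t\rangle$ for all agents, the iterates of every agent align with the target $t$ while staying on the sphere, mirroring the consensus-plus-descent structure the regret analysis controls.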
Emre Sahinoglu
Department of Mechanical & Industrial Engineering, Northeastern University, Boston, MA 02115, USA
Shahin Shahrampour
Assistant Professor, Northeastern University
Optimization and Control, Multi-Agent Systems, Machine Learning, Reinforcement Learning