Iso-Riemannian Optimization on Learned Data Manifolds

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-dimensional data often reside on low-dimensional Riemannian manifolds, yet existing Riemannian optimization methods face two fundamental bottlenecks on learned manifolds: the Levi-Civita connection does not yield constant-speed geodesics, and geodesic convexity, commonly assumed in pullback-based constructions, fails due to metric distortion. To address these issues, we propose an optimization framework grounded in iso-Riemannian geometry. Our approach introduces manifold-adapted definitions of monotonicity and Lipschitz continuity that mitigate the theoretical breakdowns induced by mapping distortion. Integrating pullback-metric modeling with a customized gradient descent, we provide rigorous convergence guarantees. Experiments on synthetic data and on benchmarks including MNIST demonstrate substantial improvements in clustering quality, enable interpretable barycenter computation, and achieve high-accuracy solutions to inverse problems, effectively suppressing geometric distortion artifacts.
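To make the constant-speed failure concrete, here is a minimal sketch in JAX. The `decoder` below is an illustrative toy stand-in for a learned decoder, not the paper's model; the sketch shows that a straight line traversed at constant speed in latent coordinates has varying speed in data space once measured with the pullback metric G(z) = J_f(z)^T J_f(z).

```python
# Minimal sketch, assuming a toy decoder; `decoder` and the dimensions are illustrative.
import jax
import jax.numpy as jnp

def decoder(z):
    # Stand-in for a learned decoder f: R^2 -> R^6 (not the paper's model).
    return jnp.concatenate([z, jnp.sin(3.0 * z), z ** 2])

def pullback_metric(z):
    # G(z) = J_f(z)^T J_f(z): the metric pulled back from data space.
    J = jax.jacfwd(decoder)(z)  # (6, 2) Jacobian of the decoder
    return J.T @ J              # (2, 2) SPD matrix wherever J has full rank

# A straight line traversed at constant speed in latent coordinates...
z0, z1 = jnp.array([-1.0, -1.0]), jnp.array([1.0, 1.0])
v = z1 - z0
speeds = jnp.stack([
    jnp.sqrt(v @ pullback_metric((1 - t) * z0 + t * z1) @ v)
    for t in jnp.linspace(0.0, 1.0, 6)
])
print(speeds)  # ...is not constant-speed in data space: ||d/dt f(z(t))|| varies
```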

📝 Abstract
High-dimensional data that exhibit an intrinsic low-dimensional structure are ubiquitous in machine learning and data science. While various approaches allow for learning the corresponding data manifold from finite samples, performing downstream tasks such as optimization directly on these learned manifolds presents a significant challenge. This work introduces a principled framework for optimization on learned data manifolds using iso-Riemannian geometry. Our approach addresses key limitations of classical Riemannian optimization in this setting: specifically, that the Levi-Civita connection fails to yield constant-speed geodesics, and that geodesic convexity assumptions break down under the learned pullback constructions commonly used in practice. To overcome these challenges, we propose new notions of monotonicity and Lipschitz continuity tailored to the iso-Riemannian setting and develop iso-Riemannian descent algorithms for which we provide a detailed convergence analysis. We demonstrate the practical effectiveness of these algorithms on both synthetic and real datasets, including MNIST under a learned pullback structure. Our approach yields interpretable barycentres, improved clustering, and provably efficient solutions to inverse problems, even in high-dimensional settings. These results establish that optimization under iso-Riemannian geometry can overcome distortions inherent to learned manifold mappings.
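For orientation, the sketch below shows plain Riemannian gradient descent under a pullback metric, the baseline construction the paper starts from; the paper's iso-Riemannian descent replaces this update with a distortion-corrected one whose exact form is not reproduced here. It reuses `pullback_metric` from the sketch above, and `loss`, `step`, and `iters` are illustrative names.

```python
# Baseline sketch only: the paper's iso-Riemannian update modifies this step.
import jax
import jax.numpy as jnp

def riemannian_descent(loss, z, pullback_metric, step=1e-2, iters=300):
    grad_loss = jax.grad(loss)
    for _ in range(iters):
        G = pullback_metric(z)                        # SPD metric at z
        nat_grad = jnp.linalg.solve(G, grad_loss(z))  # G(z)^{-1} * Euclidean gradient
        z = z - step * nat_grad                       # retraction: a plain latent-space step
    return z
```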
Problem

Research questions and friction points this paper is trying to address.

Optimizing on learned data manifolds using iso-Riemannian geometry
Overcoming limitations of classical Riemannian optimization methods
Providing efficient solutions for inverse problems and clustering (a worked inverse-problem sketch follows this list)
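As a concrete instance of the inverse-problem bullet above, the hedged sketch below recovers a latent code whose decoding matches an observation by minimising L(z) = 0.5 * ||f(z) - y||^2, reusing the toy `decoder`, `pullback_metric`, and `riemannian_descent` from the earlier sketches. With G = J^T J, the preconditioned gradient G^{-1} J^T (f(z) - y) is exactly the Gauss-Newton direction, one reason metric-aware descent is well behaved here; the target code is synthetic and illustrative.

```python
# Hedged sketch of an inverse problem on the learned manifold; reuses the
# toy `decoder`, `pullback_metric`, and `riemannian_descent` defined above.
import jax.numpy as jnp

z_true = jnp.array([0.3, -0.7])   # illustrative ground-truth latent code
y_obs = decoder(z_true)           # synthetic observation

def recon_loss(z):
    r = decoder(z) - y_obs
    return 0.5 * jnp.dot(r, r)    # L(z) = 0.5 * ||f(z) - y||^2

z_hat = riemannian_descent(recon_loss, jnp.zeros(2), pullback_metric,
                           step=0.1, iters=300)
print(z_hat)  # approaches z_true, up to local identifiability of the decoder
```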
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iso-Riemannian geometry framework for manifold optimization
Tailored monotonicity and Lipschitz continuity notions
Iso-Riemannian descent algorithms with convergence guarantees (a barycentre sketch follows this list)
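To illustrate the barycentre claim, the sketch below computes a crude proxy for an on-manifold mean by minimising the sum of squared data-space distances between f(z) and the decoded sample points, again reusing the earlier toy definitions; the paper's iso-Riemannian barycentre is defined through its corrected geometry rather than this proxy.

```python
# Hedged sketch: a barycentre proxy on the learned manifold, reusing the toy
# `decoder`, `pullback_metric`, and `riemannian_descent` from above.
import jax.numpy as jnp

latent_samples = jnp.array([[0.8, 0.1], [-0.5, 0.6], [0.2, -0.9]])  # toy codes
decoded = jnp.stack([decoder(z) for z in latent_samples])

def barycentre_loss(z):
    # Sum of squared data-space distances from f(z) to the decoded samples.
    return 0.5 * jnp.sum((decoder(z)[None, :] - decoded) ** 2)

z_bar = riemannian_descent(barycentre_loss, latent_samples.mean(axis=0),
                           pullback_metric, step=0.1, iters=300)
print(decoder(z_bar))  # an on-manifold mean, unlike the naive ambient average
```

Because the update stays in latent coordinates while the objective is measured in data space, the result decodes to a point on the learned manifold, which is what makes such barycentres interpretable.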