Riemannian AmbientFlow: Towards Simultaneous Manifold Learning and Generative Modeling from Corrupted Data

πŸ“… 2026-01-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work proposes the Riemannian AmbientFlow framework to jointly recover the underlying data manifold structure and a generative model from noisy or linearly degraded observations. By integrating Riemannian geometry with AmbientFlow for the first time, the method leverages a data-driven Riemannian metric induced by normalizing flows within a variational inference framework. It simultaneously learns a nonlinear manifold and a generative model through the pullback metric and a Riemannian autoencoder. Theoretical analysis provides guarantees for manifold recovery and establishes a bi-Lipschitz parameterization. Experiments on synthetic low-dimensional manifolds and MNIST demonstrate the approach’s effectiveness, showing that the learned decoder serves as a generative prior with provable recovery guarantees for solving inverse problems.
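The pullback-metric construction described above can be sketched numerically: for a smooth map f from latent space to data space, the pullback of the Euclidean metric at a latent point z is G(z) = J_f(z)ᵀ J_f(z), where J_f is the Jacobian. The following is a minimal illustrative sketch with a toy invertible map standing in for the paper's trained normalizing flow; the map `flow` and all helper names are assumptions for demonstration, not the authors' code.

```python
import numpy as np

def flow(z):
    # Toy smooth invertible map from a 2-D latent space to data space.
    # Illustrative stand-in for a trained normalizing flow.
    x, y = z
    return np.array([x + 0.5 * np.tanh(y), y + 0.5 * np.tanh(x)])

def jacobian(f, z, eps=1e-6):
    # Central finite-difference Jacobian: J[i, j] = d f_i / d z_j.
    z = np.asarray(z, dtype=float)
    J = np.zeros((len(f(z)), len(z)))
    for j in range(len(z)):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (f(z + dz) - f(z - dz)) / (2 * eps)
    return J

def pullback_metric(f, z):
    # Pullback of the Euclidean metric under f: G(z) = J^T J.
    J = jacobian(f, z)
    return J.T @ J

z0 = np.array([0.3, -0.7])
G = pullback_metric(flow, z0)
# G is symmetric positive definite wherever f is a local diffeomorphism,
# so it defines a valid Riemannian metric on the latent space.
```

Lengths and geodesics measured in the latent space under G then reflect distances on the learned data manifold, which is the sense in which the flow "induces" a data-driven Riemannian geometry.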

πŸ“ Abstract
Modern generative modeling methods have demonstrated strong performance in learning complex data distributions from clean samples. In many scientific and imaging applications, however, clean samples are unavailable, and only noisy or linearly corrupted measurements can be observed. Moreover, latent structures present in the data, such as manifold geometries, are important to extract for further downstream scientific analysis. In this work, we introduce Riemannian AmbientFlow, a framework for simultaneously learning a probabilistic generative model and the underlying nonlinear data manifold directly from corrupted observations. Building on the variational inference framework of AmbientFlow, our approach incorporates a data-driven Riemannian geometry induced by normalizing flows, enabling the extraction of manifold structure through pullback metrics and Riemannian autoencoders. We establish theoretical guarantees showing that, under appropriate geometric regularization and measurement conditions, the learned model recovers the underlying data distribution up to a controllable error and yields a smooth, bi-Lipschitz manifold parameterization. We further show that the resulting smooth decoder can serve as a principled generative prior for inverse problems with recovery guarantees. We empirically validate our approach on low-dimensional synthetic manifolds and on MNIST.
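The bi-Lipschitz property claimed for the learned parameterization can be checked numerically in a toy setting: a map f is locally bi-Lipschitz on a region if the singular values of its Jacobian are bounded away from zero and infinity there, i.e. α‖u‖ ≤ ‖J_f(z)u‖ ≤ β‖u‖ with 0 < α ≤ β < ∞. The sketch below runs this check for an illustrative toy map; `decoder` and the sampling range are assumptions for demonstration, not the paper's trained decoder.

```python
import numpy as np

def decoder(z):
    # Toy smooth map standing in for a learned manifold parameterization.
    x, y = z
    return np.array([x + 0.5 * np.tanh(y), y + 0.5 * np.tanh(x)])

def jacobian(f, z, eps=1e-6):
    # Central finite-difference Jacobian: J[i, j] = d f_i / d z_j.
    z = np.asarray(z, dtype=float)
    J = np.zeros((len(f(z)), len(z)))
    for j in range(len(z)):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (f(z + dz) - f(z - dz)) / (2 * eps)
    return J

# Empirical bi-Lipschitz bounds: extreme Jacobian singular values
# over latent points sampled from a box.
rng = np.random.default_rng(0)
sig_min, sig_max = np.inf, 0.0
for _ in range(100):
    z = rng.uniform(-2.0, 2.0, size=2)
    s = np.linalg.svd(jacobian(decoder, z), compute_uv=False)
    sig_min = min(sig_min, s[-1])
    sig_max = max(sig_max, s[0])
# sig_min > 0 with sig_max finite certifies a local bi-Lipschitz
# parameterization on the sampled region.
```

In the paper's setting such bounds are established theoretically rather than sampled; this numerical version is only a sanity check one could run on a trained decoder.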
Problem

Research questions and friction points this paper is trying to address.

manifold learning
generative modeling
corrupted data
Riemannian geometry
inverse problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Riemannian geometry
normalizing flows
manifold learning
generative modeling
corrupted data
πŸ”Ž Similar Papers
No similar papers found.