Laplace Learning in Wasserstein Space

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the failure of the Euclidean space assumption in semi-supervised learning for high-dimensional data. Methodologically, it extends graph Laplacian learning from finite-dimensional Euclidean spaces to the infinite-dimensional Wasserstein space, the first formulation of its kind. Leveraging the manifold assumption, it rigorously characterizes the Laplace–Beltrami operator on submanifolds of the Wasserstein space and establishes a theoretically consistent classification framework via variational convergence analysis of discrete graph p-Dirichlet energies. Its key contribution lies in the novel integration of Wasserstein geometry, p-Dirichlet energy modeling, and variational convergence theory, overcoming traditional graph-based learning's reliance on linear or finite-dimensional spaces. Experiments on multiple benchmark datasets demonstrate that the method achieves both theoretical consistency and robust classification performance in high-dimensional settings, offering a new paradigm for semi-supervised learning in non-Euclidean, infinite-dimensional spaces.

📝 Abstract
The manifold hypothesis posits that high-dimensional data typically resides on low-dimensional subspaces. In this paper, we assume the manifold hypothesis to investigate graph-based semi-supervised learning methods. In particular, we examine Laplace Learning in the Wasserstein space, extending the classical notion of graph-based semi-supervised learning algorithms from finite-dimensional Euclidean spaces to an infinite-dimensional setting. To achieve this, we prove variational convergence of a discrete graph p-Dirichlet energy to its continuum counterpart. In addition, we characterize the Laplace-Beltrami operator on a submanifold of the Wasserstein space. Finally, we validate the proposed theoretical framework through numerical experiments conducted on benchmark datasets, demonstrating the consistency of our classification performance in high-dimensional settings.
Problem

Research questions and friction points this paper is trying to address.

Extending graph-based semi-supervised learning to infinite-dimensional Wasserstein space
Proving variational convergence of discrete graph energy to continuum
Characterizing Laplace-Beltrami operator on Wasserstein space submanifolds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends Laplace Learning to infinite-dimensional Wasserstein space
Proves variational convergence of discrete graph energy
Characterizes Laplace-Beltrami operator on Wasserstein submanifold
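To make the core idea concrete, the sketch below shows classical graph-based Laplace learning (harmonic label propagation) run on distributional data, in the spirit of the paper but not the authors' implementation. It relies on one known fact: for two 1D empirical measures with the same number of samples, the 2-Wasserstein distance reduces to the L2 distance between sorted samples. All function names and parameters here (`w2_1d`, `laplace_learning`, `eps`) are illustrative choices, not from the paper.

```python
import numpy as np

def w2_1d(a, b):
    # 2-Wasserstein distance between two 1D empirical measures with
    # equal sample counts: L2 distance between sorted samples,
    # normalized by sqrt(n) so W2^2 = (1/n) * sum of squared gaps.
    return np.linalg.norm(np.sort(a) - np.sort(b)) / np.sqrt(len(a))

def laplace_learning(dist, labels, labeled_idx, eps=1.0):
    # Build a weighted graph from pairwise distances with a Gaussian kernel.
    W = np.exp(-(dist / eps) ** 2)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W  # unnormalized graph Laplacian

    n = len(dist)
    unlabeled = [i for i in range(n) if i not in set(labeled_idx)]

    # Harmonic extension (Laplace learning): fix labels on labeled nodes,
    # solve L_uu u_u = -L_ul y_l for the unlabeled block.
    Luu = L[np.ix_(unlabeled, unlabeled)]
    Lul = L[np.ix_(unlabeled, labeled_idx)]
    y = labels[labeled_idx]

    u = np.zeros(n)
    u[labeled_idx] = y
    u[unlabeled] = np.linalg.solve(Luu, -Lul @ y)
    return u
```

As a usage example, six 1D point clouds drawn from N(0, 1) and N(5, 1) form two tight clusters in Wasserstein distance; labeling one cloud per cluster and propagating recovers the cluster membership of the rest by the sign of `u`.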
Mary Chriselda Antony Oliver
Department of Applied Mathematics and Theoretical Physics, The University of Cambridge, Wilberforce Rd, CB3 0WA, Cambridge, UK
Michael Roberts
Department of Applied Mathematics and Theoretical Physics, The University of Cambridge, Wilberforce Rd, CB3 0WA, Cambridge, UK
Carola-Bibiane Schönlieb
DAMTP, University of Cambridge
Mathematics · Computer Science · Medical imaging · Inverse problems
Matthew Thorpe
Associate Professor in Statistics, University of Warwick