🤖 AI Summary
This work addresses the absence of large-sample limit theory for Laplace learning under infinite-dimensional Gaussian measures by studying the asymptotic behavior of semi-supervised Laplace methods on data drawn from Gaussian measures on Hilbert spaces. By constructing a Dirichlet energy on graphs and analyzing its pointwise convergence, the study overcomes the fundamental obstacle posed by the lack of a Lebesgue measure in infinite dimensions. The results establish a rigorous theoretical bridge between discrete graph-based models and their continuum counterparts, proving that the graph Dirichlet energy converges pointwise to the corresponding continuum functional as the sample size grows; this constitutes the first extension of the large-sample limit theory for Laplace learning to the setting of infinite-dimensional Gaussian measures.
📝 Abstract
Laplace learning is a semi-supervised method for inferring missing labels in a partially labeled dataset by exploiting the geometry provided by the unlabeled data points. The method minimizes a Dirichlet energy defined on a (discrete) graph constructed from the full dataset. In finite dimensions the asymptotics in the large (unlabeled) data limit are well understood: the graph Dirichlet energy converges to a continuum Sobolev semi-norm weighted by the Lebesgue density of the data-generating measure. The lack of a Lebesgue measure on infinite-dimensional spaces requires rethinking this analysis when the data are not finite-dimensional. In this paper we take a first step in this direction by analyzing the setting in which the data are generated by a Gaussian measure on a Hilbert space, and we prove pointwise convergence of the graph Dirichlet energy.
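To make the finite-dimensional method concrete, here is a minimal sketch of Laplace learning on a small point cloud: a Gaussian-weighted graph is built from the data, the graph Dirichlet energy E(u) = ½ Σᵢⱼ wᵢⱼ (uᵢ − uⱼ)² is formed from the graph Laplacian, and the missing labels are obtained by minimizing E(u) subject to the given labels. This is not the paper's infinite-dimensional setting; the kernel, bandwidth `eps`, sample size, and which nodes are labeled are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))   # sample points (a finite-dimensional stand-in)
eps = 0.5                      # kernel bandwidth (assumed, not from the paper)

# Gaussian edge weights w_ij = exp(-|x_i - x_j|^2 / eps^2), no self-loops
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / eps**2)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = D - W
L = np.diag(W.sum(axis=1)) - W

def dirichlet_energy(u):
    """Graph Dirichlet energy: u^T L u = 0.5 * sum_ij w_ij (u_i - u_j)^2."""
    return float(u @ L @ u)

# Laplace learning: fix a few labels, harmonically extend to the rest.
labeled = np.array([0, 1, 2])
g = np.array([0.0, 1.0, 1.0])  # given labels on the labeled nodes
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

u = np.zeros(len(X))
u[labeled] = g
# Minimizing E(u) over the unlabeled values gives the linear system
#   L_uu u_u = -L_ul g   (first-order optimality condition)
u[unlabeled] = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                               -L[np.ix_(unlabeled, labeled)] @ g)

print(dirichlet_energy(u))
```

Because the minimizer is harmonic on the unlabeled nodes, the extended labels obey a discrete maximum principle: every entry of `u` lies between the smallest and largest given label.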