Efficient Numerical Integration in Reproducing Kernel Hilbert Spaces via Leverage Scores Sampling

📅 2023-11-22
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
This paper addresses the problem of efficiently approximating function integrals in a reproducing kernel Hilbert space (RKHS) given only i.i.d. samples from the target distribution. We propose a subsampling strategy based on (approximate) leverage scores—the first application of leverage scores to RKHS numerical integration—which drastically reduces the number of function evaluations required. Theoretically, we prove that only $m = O(\log n)$ subsampled points suffice to preserve the optimal $n^{-1/2}$ convergence rate, and our error bound adapts to the smoothness of the integrand, achieving minimax-optimal rates in Sobolev spaces. Empirically, the method significantly improves the accuracy–efficiency trade-off over random or greedy quadrature on real-world datasets. Our results directly enable scalable computation of maximum mean discrepancy (MMD) and facilitate the design of efficient kernel-based hypothesis tests.
📝 Abstract
In this work we consider the problem of numerical integration, i.e., approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand. We focus on the setting in which the target distribution is only accessible through a set of $n$ i.i.d. observations, and the integrand belongs to a reproducing kernel Hilbert space. We propose an efficient procedure which exploits a small i.i.d. random subset of $m<n$ samples drawn either uniformly or using approximate leverage scores from the initial observations. Our main result is an upper bound on the approximation error of this procedure for both sampling strategies. It yields sufficient conditions on the subsample size to recover the standard (optimal) $n^{-1/2}$ rate while drastically reducing the number of function evaluations, and thus the overall computational cost. Moreover, we obtain rates with respect to the number $m$ of evaluations of the integrand which adapt to its smoothness, and match known optimal rates, for instance for Sobolev spaces. We illustrate our theoretical findings with numerical experiments on real datasets, which highlight the attractive efficiency–accuracy tradeoff of our method compared to existing randomized and greedy quadrature methods. We note that the problem of numerical integration in RKHS amounts to designing a discrete approximation of the kernel mean embedding of the target distribution. As a consequence, direct applications of our results also include the efficient computation of maximum mean discrepancies between distributions and the design of efficient kernel-based tests.
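To make the procedure in the abstract concrete, the sketch below (an illustration, not the authors' code) draws $m$ of $n$ sample points with probability proportional to their ridge leverage scores $\ell_i = \bigl(K(K + n\lambda I)^{-1}\bigr)_{ii}$, then computes quadrature weights that best approximate the empirical mean embedding $\hat\mu = \frac{1}{n}\sum_i k(x_i,\cdot)$ in RKHS norm. The Gaussian kernel, the regularization $\lambda$, and the jitter term are illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def ridge_leverage_scores(K, lam):
    # l_i = (K (K + n*lam*I)^{-1})_{ii}, the ridge leverage scores.
    n = K.shape[0]
    return np.diag(K @ np.linalg.solve(K + n * lam * np.eye(n), np.eye(n)))

rng = np.random.default_rng(0)
n, m = 200, 20
X = rng.normal(size=(n, 2))           # i.i.d. observations from the target
K = gaussian_kernel(X, X)

# Subsample m points with probability proportional to their leverage scores.
scores = ridge_leverage_scores(K, lam=1e-3)
idx = rng.choice(n, size=m, replace=False, p=scores / scores.sum())

# Quadrature weights: best RKHS-norm approximation of the empirical mean
# embedding mu_hat = (1/n) sum_i k(x_i, .) using only the m selected atoms.
K_mm = K[np.ix_(idx, idx)]
K_mn = K[idx, :]
w = np.linalg.solve(K_mm + 1e-10 * np.eye(m), K_mn.mean(axis=1))

# Worst-case integration error over the unit ball of the RKHS:
# ||mu_hat - sum_j w_j k(x_j, .)||_H^2, expanded via kernel evaluations.
err2 = K.mean() - 2 * w @ K_mn.mean(axis=1) + w @ K_mm @ w
```

After this step, any integral $\int f \,\mathrm{d}\rho$ with $f$ in the RKHS is approximated by the weighted sum $\sum_j w_j f(x_{i_j})$, requiring only $m$ evaluations of $f$ instead of $n$.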
Problem

Research questions and friction points this paper is trying to address.

Efficient numerical integration in RKHS using leverage scores sampling
Approximating integrals with limited pointwise evaluations of integrand
Reducing computational cost while maintaining optimal approximation rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverage scores sampling for efficient integration
Optimal rate recovery with reduced evaluations
Adaptive rates matching Sobolev space optimality
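One direct application noted in the abstract is the efficient computation of maximum mean discrepancies: once each distribution is summarized by a weighted set of atoms (quadrature points and weights), the squared MMD reduces to a few small kernel-matrix products. The sketch below is a minimal illustration with uniform weights and a Gaussian kernel; all names are hypothetical.

```python
import numpy as np

def rbf(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_weighted(Xp, wp, Xq, wq, sigma=1.0):
    # Squared MMD between two weighted empirical mean embeddings:
    # ||sum_i wp_i k(xp_i,.) - sum_j wq_j k(xq_j,.)||_H^2
    return (wp @ rbf(Xp, Xp, sigma) @ wp
            - 2 * wp @ rbf(Xp, Xq, sigma) @ wq
            + wq @ rbf(Xq, Xq, sigma) @ wq)

rng = np.random.default_rng(1)
P = rng.normal(0.0, 1.0, size=(100, 1))   # samples from distribution P
Q = rng.normal(2.0, 1.0, size=(100, 1))   # samples from a shifted Q
wp = np.full(100, 1 / 100)                # uniform weights here; in general,
wq = np.full(100, 1 / 100)                # the quadrature weights from above
same = mmd2_weighted(P, wp, P, wp)        # identical embeddings: ~0
diff = mmd2_weighted(P, wp, Q, wq)        # shifted distributions: > 0
```

Replacing the uniform weights with the leverage-score quadrature weights turns this into an estimator whose cost scales with the subsample size $m$ rather than $n$, which is what makes the kernel-based tests mentioned in the abstract scalable.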
Antoine Chatalic
MaLGa Center - DIBRIS - Università di Genova, Genoa, Italy

Nicolas Schreuder
CNRS, LIGM
Statistics · Machine Learning

E. Vito
MaLGa Center - DIMA - Università di Genova, Genoa, Italy

L. Rosasco
CBMM - Massachusetts Institute of Technology, Cambridge, MA, USA; Istituto Italiano di Tecnologia, Genoa, Italy