🤖 AI Summary
This paper addresses the problem of efficiently approximating integrals of functions in a reproducing kernel Hilbert space (RKHS) given only i.i.d. samples from the target distribution. The authors propose a subsampling strategy based on (approximate) leverage scores, which drastically reduces the number of function evaluations required. Theoretically, they prove that $m = O(\log n)$ subsampled points suffice to preserve the optimal $n^{-1/2}$ convergence rate, and the error bound adapts to the smoothness of the integrand, achieving minimax-optimal rates in Sobolev spaces. Empirically, the method significantly improves the accuracy-efficiency trade-off over random and greedy quadrature baselines on real-world datasets. The results directly enable scalable computation of the maximum mean discrepancy (MMD) and facilitate the design of efficient kernel-based hypothesis tests.
📝 Abstract
In this work we consider the problem of numerical integration, i.e., approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand. We focus on the setting in which the target distribution is only accessible through a set of $n$ i.i.d. observations, and the integrand belongs to a reproducing kernel Hilbert space. We propose an efficient procedure that exploits a small i.i.d. random subset of $m<n$ samples drawn either uniformly or using approximate leverage scores from the initial observations. Our main result is an upper bound on the approximation error of this procedure for both sampling strategies. It yields sufficient conditions on the subsample size to recover the standard (optimal) $n^{-1/2}$ rate while drastically reducing the number of function evaluations, and thus the overall computational cost. Moreover, we obtain rates with respect to the number $m$ of evaluations of the integrand which adapt to its smoothness, and match known optimal rates, for instance for Sobolev spaces. We illustrate our theoretical findings with numerical experiments on real datasets, which highlight the attractive efficiency-accuracy tradeoff of our method compared to existing randomized and greedy quadrature methods. We note that the problem of numerical integration in an RKHS amounts to designing a discrete approximation of the kernel mean embedding of the target distribution. As a consequence, direct applications of our results also include the efficient computation of maximum mean discrepancies between distributions and the design of efficient kernel-based tests.
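To make the procedure in the abstract concrete, here is a minimal NumPy sketch of leverage-score kernel quadrature: compute ridge leverage scores of the kernel matrix over the $n$ observations, subsample $m<n$ points proportionally to them, and choose weights so that the weighted subsample approximates the empirical kernel mean embedding in the RKHS. All names here (`kernel_quadrature`, the RBF kernel, the ridge parameter `lam`) and the specific weight computation are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix k(a, b) = exp(-gamma * ||a - b||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def leverage_scores(K, lam):
    # Ridge leverage scores: diag(K (K + n*lam*I)^{-1}).
    n = K.shape[0]
    return np.diag(K @ np.linalg.solve(K + n * lam * np.eye(n), np.eye(n)))

def kernel_quadrature(X, f, m, lam=1e-3, gamma=1.0, rng=None):
    # Estimate (1/n) * sum_i f(x_i) using only m < n evaluations of f.
    rng = np.random.default_rng(rng)
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    # Subsample m points with probability proportional to their leverage scores.
    scores = leverage_scores(K, lam)
    p = scores / scores.sum()
    S = rng.choice(n, size=m, replace=False, p=p)
    # Weights minimizing the RKHS distance between sum_j w_j k(x_j, .)
    # and the empirical mean embedding (1/n) sum_i k(x_i, .).
    K_SS = K[np.ix_(S, S)]
    w = np.linalg.solve(K_SS + 1e-10 * np.eye(m), K[S, :].mean(axis=1))
    return float(w @ f(X[S]))
```

The returned estimate $\sum_j w_j f(x_j)$ evaluates $f$ only at the $m$ subsampled points; its accuracy is governed by how well the reweighted subsample reproduces the empirical mean embedding in the RKHS.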