🤖 AI Summary
This paper addresses the deterministic approximation of a target probability distribution by a finite set of points, focusing on kernel-based numerical integration (cubature) via the maximum mean discrepancy (MMD). To overcome the computational intractability of global MMD minimisation, we propose instead targeting *MMD stationary points*. We establish, for the first time, their *super-convergence* property: for integrands in the associated RKHS, the cubature error decays faster than the MMD itself. We derive the first non-asymptotic, finite-particle upper bound on the gradient-flow error and design a practical discretised gradient-flow algorithm with rigorous theoretical guarantees. Our approach significantly improves the accuracy and stability of high-dimensional numerical integration, offering a new paradigm for cubature, data compression, and optimisation.
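To make the objective concrete, here is a minimal sketch of the standard biased (V-statistic) estimate of MMD² under a Gaussian kernel, with the target represented by samples. This is illustrative only, not code from the paper; the kernel choice, the `lengthscale` parameter, and the function names are assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, lengthscale=1.0):
    # Pairwise Gaussian (RBF) kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 l^2)).
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * lengthscale**2))

def mmd_squared(X, Y, lengthscale=1.0):
    # Biased (V-statistic) estimate of MMD^2 between the empirical measures
    # on the point sets X (shape (n, d)) and Y (shape (m, d)).
    return (gaussian_kernel(X, X, lengthscale).mean()
            + gaussian_kernel(Y, Y, lengthscale).mean()
            - 2.0 * gaussian_kernel(X, Y, lengthscale).mean())
```

Minimising this quantity over the positions of the points in `X` is the non-convex objective whose stationary points the paper studies.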
📝 Abstract
Approximation of a target probability distribution using a finite set of points is a problem of fundamental importance, arising in cubature, data compression, and optimisation. Several authors have proposed to select points by minimising a maximum mean discrepancy (MMD), but the non-convexity of this objective precludes global minimisation in general. Instead, we consider *stationary* points of the MMD which, in contrast to points globally minimising the MMD, can be accurately computed. Our main theoretical contribution is the (perhaps surprising) result that, for integrands in the associated reproducing kernel Hilbert space, the cubature error of stationary MMD points vanishes *faster* than the MMD. Motivated by this *super-convergence* property, we consider discretised gradient flows as a practical strategy for computing stationary points of the MMD, presenting a refined convergence analysis that establishes a novel non-asymptotic finite-particle error bound, which may be of independent interest.
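As a rough illustration of the discretised-gradient-flow idea (a sketch under stated assumptions, not the paper's algorithm), the following performs explicit Euler descent of the MMD² estimate with respect to the particle positions, reusing `gaussian_kernel` from the sketch above; the sample-based target, step size, and iteration count are illustrative choices.

```python
def mmd_grad(X, Y, lengthscale=1.0):
    # Gradient of the biased MMD^2 estimate with respect to each particle x_i,
    # using grad_x k(x, y) = -((x - y) / l^2) * k(x, y) for the Gaussian kernel.
    n, m = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, lengthscale)   # (n, n)
    Kxy = gaussian_kernel(X, Y, lengthscale)   # (n, m)
    diff_xx = X[:, None, :] - X[None, :, :]    # x_i - x_j, shape (n, n, d)
    diff_xy = X[:, None, :] - Y[None, :, :]    # x_i - y_j, shape (n, m, d)
    return (-(2.0 / (n**2 * lengthscale**2)) * (Kxx[:, :, None] * diff_xx).sum(axis=1)
            + (2.0 / (n * m * lengthscale**2)) * (Kxy[:, :, None] * diff_xy).sum(axis=1))

def mmd_gradient_flow(X0, Y, step=0.5, n_steps=1000, lengthscale=1.0):
    # Explicit Euler discretisation of the MMD gradient flow: particles follow
    # the negative gradient of MMD^2 until they are (approximately) stationary.
    X = X0.copy()
    for _ in range(n_steps):
        X = X - step * mmd_grad(X, Y, lengthscale)
    return X

# Example: compress 1000 Gaussian samples into 20 near-stationary MMD points.
rng = np.random.default_rng(0)
target = rng.normal(size=(1000, 2))            # samples standing in for the target
particles = mmd_gradient_flow(rng.normal(size=(20, 2)), target)
```

A fixed point of this update (zero gradient) is precisely a stationary point of the MMD objective, which is the object to which the paper's super-convergence guarantee applies.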