Optimal minimax rate of learning nonlocal interaction kernels

๐Ÿ“… 2023-11-28
๐Ÿ“ˆ Citations: 1
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This paper addresses the nonparametric estimation of radial interaction kernels in nonlocal interacting particle systems, asking whether the minimax convergence rate matches that of classical nonparametric regression. To tackle the statistical challenges arising from nonlocal dependence in finite samples, we propose a tamed least-squares estimator (tLSE). We establish, for the first time, that when the kernel belongs to a Sobolev space with regularity index $\eta \geq 1/4$, the optimal minimax rate is $M^{-2\eta/(2\eta+1)}$, identical to the classical regression rate. Our analysis integrates tools from random matrix theory, Sobolev embedding, fourth-moment bounds, nonasymptotic bounds on the left tail probability of the smallest eigenvalue of the normal matrix, and the Fano–Tsybakov information-theoretic method. This result unifies the optimality analysis for models with either local or nonlocal dependence and provides a rigorous foundation for nonparametric inference in high-dimensional interacting systems.
๐Ÿ“ Abstract
Nonparametric estimation of nonlocal interaction kernels is crucial in various applications involving interacting particle systems. The inference challenge, situated at the nexus of statistical learning and inverse problems, arises from the nonlocal dependency. A central question is whether the optimal minimax rate of convergence for this problem aligns with the rate of $M^{-\frac{2\eta}{2\eta+1}}$ in classical nonparametric regression, where $M$ is the sample size and $\eta$ represents the regularity index of the radial kernel. Our study confirms this alignment for systems with a finite number of particles. We introduce a tamed least squares estimator (tLSE) that achieves the optimal convergence rate when $\eta \geq 1/4$ for a broad class of exchangeable distributions by leveraging random matrix theory and Sobolev embedding. The upper minimax rate relies on fourth-moment bounds for normal vectors and nonasymptotic bounds for the left tail probability of the smallest eigenvalue of the normal matrix. The lower minimax rate is derived using the Fano–Tsybakov hypothesis testing method. Our tLSE method offers a straightforward approach for establishing the optimal minimax rate for models with either local or nonlocal dependency.
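As a quick numerical illustration of the rate $M^{-\frac{2\eta}{2\eta+1}}$ quoted in the abstract, the short sketch below (the helper name `rate_exponent` is ours, not from the paper) evaluates the exponent for a few regularity indices; the rate approaches the parametric $M^{-1}$ as $\eta \to \infty$:

```python
def rate_exponent(eta):
    """Exponent 2*eta / (2*eta + 1) in the minimax rate M**(-2*eta/(2*eta+1)),
    where eta is the Sobolev regularity index of the radial kernel."""
    return 2 * eta / (2 * eta + 1)

# Sample size M = 10_000; rougher kernels (small eta) converge more slowly.
for eta in (0.25, 1.0, 4.0):
    print(f"eta={eta}: rate ~ M^(-{rate_exponent(eta):.3f})")
```

At the threshold regularity $\eta = 1/4$ the exponent is $1/3$; at $\eta = 1$ it is the familiar $2/3$ of classical nonparametric regression.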
Problem

Research questions and friction points this paper is trying to address.

Estimating nonlocal interaction kernels in particle systems
Determining optimal minimax convergence rates for kernel learning
Developing a tLSE method for nonparametric regression with nonlocal dependence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tamed least squares estimator for optimal rate
Leverages random matrix theory and Sobolev embedding
Uses Fano-Tsybakov method for lower minimax rate
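The taming idea behind the tLSE can be sketched as follows: solve the least-squares normal equations only when the smallest eigenvalue of the normal matrix clears a threshold, and fall back to the zero estimator otherwise, which controls the left tail probability the paper bounds. This is a minimal illustrative sketch, not the paper's implementation; the basis matrix `G`, the threshold `tau`, and the function name are assumptions:

```python
import numpy as np

def tamed_lse(G, y, tau=1e-3):
    """Tamed least squares: G is the (M x n) matrix of basis-function
    evaluations, y the observations. Solve the normal equations only if
    the normal matrix is well conditioned; otherwise return the zero
    estimator (the 'taming' step)."""
    M = len(y)
    A = G.T @ G / M                    # normal matrix
    b = G.T @ y / M
    lam_min = np.linalg.eigvalsh(A)[0] # smallest eigenvalue (ascending order)
    if lam_min < tau:                  # ill-conditioned: tame by returning 0
        return np.zeros(G.shape[1])
    return np.linalg.solve(A, b)       # ordinary least-squares coefficients
```

On a well-conditioned noiseless problem the estimator reduces to plain least squares, while the zero fallback removes the rare ill-conditioned events from the risk bound.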