🤖 AI Summary
Existing methods for estimating the spectral moments of kernel integral operators from limited samples rely on the eigenvalue spectrum of the sample covariance matrix, and therefore suffer from a systematic bias that scales with the dimensions of the measurement matrix.
Method: We propose the first unbiased spectral moment estimator based on dynamic programming, integrating random matrix theory with kernel methods. In the asymptotic regime where both the number of inputs and the number of features grow to infinity, the estimator exactly recovers spectral moments of arbitrary order from finitely sampled measurement matrices.
Contribution/Results: We rigorously establish consistency for the RBF kernel and use the estimator to characterize the geometric structure of learned representations in deep neural networks. Unlike prior approaches, our method removes the sensitivity to sample size while offering theoretical rigor, robustness to dimensionality, and practical interpretability.
📝 Abstract
Analyzing the structure of sampled features from an input data distribution is challenging when constrained by limited measurements in both the number of inputs and features. Traditional approaches often rely on the eigenvalue spectrum of the sample covariance matrix derived from finite measurement matrices; however, these spectra are sensitive to the size of the measurement matrix, leading to biased insights. In this paper, we introduce a novel algorithm that provides unbiased estimates of the spectral moments of the kernel integral operator in the limit of infinite inputs and features from finitely sampled measurement matrices. Our method, based on dynamic programming, efficiently estimates moments of the operator spectrum to arbitrary order. We demonstrate the accuracy of our estimator on radial basis function (RBF) kernels, highlighting its consistency with the theoretical spectra. Furthermore, we showcase the practical utility and robustness of our method in understanding the geometry of learned representations in neural networks.
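The paper's dynamic-programming estimator is not reproduced here, but the bias the abstract describes is easy to observe numerically. The sketch below (plain NumPy; a Gaussian toy distribution with identity population covariance stands in for sampled features, and all variable names are illustrative) shows that the naive second spectral moment of a sample covariance matrix concentrates around 1 + d/n rather than the true value 1, a bias set by the aspect ratio of the measurement matrix rather than by the underlying operator. Subtracting the Marchenko-Pastur correction d/n, which is exact only in this identity-covariance toy case, recovers the true moment:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 1000                 # n inputs, d features; aspect ratio d/n = 0.5
X = rng.standard_normal((n, d))   # toy data: population covariance is the identity

S = X.T @ X / n                   # sample covariance matrix (d x d)
m2_naive = (S * S).sum() / d      # naive second moment: tr(S^2)/d via sum of squares

# For identity covariance, Marchenko-Pastur theory gives a limiting
# second moment of 1 + d/n, so the naive estimate is biased upward by
# the matrix aspect ratio even though the true moment is exactly 1.
m2_corrected = m2_naive - d / n

print(f"naive m2     = {m2_naive:.3f}  (true value: 1.0)")
print(f"corrected m2 = {m2_corrected:.3f}")
```

This one-moment, one-kernel correction is only a toy; the contribution of the paper is an estimator that removes such aspect-ratio bias for moments of arbitrary order and general kernels.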