🤖 AI Summary
This work investigates the statistical behavior of kernel ridge regression (KRR) in the quadratic asymptotic regime, where $n \asymp d^2$. Addressing the lack of theoretical characterization of random kernel matrices in this nonstandard scaling, we first derive operator-norm approximation bounds for generalized inner-product kernel matrices. Leveraging the moment method, Wick's formula, orthogonal polynomials, and resolvent analysis of random matrices with correlated entries, we rigorously characterize their limiting spectral distribution and establish universality: convergence to the spectrum of a quadratic kernel matrix with a correction term. Building on this, we obtain exact asymptotic expressions for both the training and generalization errors, unifying deterministic and random teacher models. Furthermore, under a Gaussian moment-matching assumption, our results hold for general data distributions. This provides a rigorous theoretical foundation for high-dimensional kernel learning in the quadratic scaling regime.
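As a rough numerical illustration of the universality claim (not code from the paper), the sketch below compares the spectrum of a smooth inner-product kernel matrix $K_{ij} = f(\langle x_i, x_j \rangle / d)$ with that of a degree-2 Taylor surrogate when $n \asymp d^2$. The kernel choice $f = \exp$, the problem sizes, and the crude diagonal correction are all illustrative assumptions; the paper's exact correction terms are not reproduced here.

```python
# Hypothetical sketch: spectra of a smooth inner-product kernel matrix versus
# a quadratic (degree-2 Taylor) surrogate in the quadratic regime n ~ d^2.
# All constants are illustrative; this is not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)
d = 30
n = d * d // 2                       # quadratic regime: n / d^2 -> 1/2
X = rng.standard_normal((n, d))      # Gaussian data, identity covariance

G = X @ X.T / d                      # normalized inner products <x_i, x_j> / d
K = np.exp(G)                        # smooth inner-product kernel, f(t) = e^t

# Degree-2 Taylor surrogate f(0) + f'(0) G + (f''(0)/2) G^2 (entrywise square),
# plus a crude diagonal correction since G_ii ~ 1 sits far from the expansion
# point 0. The paper's additional correction terms are omitted here.
K_quad = 1.0 + G + 0.5 * G**2 + (np.e - 2.5) * np.eye(n)

ev_K = np.sort(np.linalg.eigvalsh(K))
ev_quad = np.sort(np.linalg.eigvalsh(K_quad))
print("median eigenvalue gap:", np.median(np.abs(ev_K - ev_quad)))
```

At these small sizes the comparison is only suggestive; the theorem concerns the limit where $d \to \infty$ with $n/d^2$ held at a constant.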
📝 Abstract
Kernel ridge regression (KRR) is a popular class of machine learning models that has become an important tool for understanding deep learning. Much of the focus has been on studying the proportional asymptotic regime, $n \asymp d$, where $n$ is the number of training samples and $d$ is the dimension of the data. In this regime, under certain conditions on the data distribution, the kernel random matrix involved in KRR exhibits behavior akin to that of a linear kernel. In this work, we extend the study of kernel regression to the quadratic asymptotic regime, where $n \asymp d^2$. In this regime, we demonstrate that a broad class of inner-product kernels exhibits behavior similar to a quadratic kernel. Specifically, we establish an operator-norm approximation bound for the difference between the original kernel random matrix and a quadratic kernel random matrix with additional correction terms beyond the Taylor expansion of the kernel function. The approximation holds for general data distributions under a Gaussian moment-matching assumption with a covariance structure. This new approximation is used to obtain the limiting spectral distribution of the original kernel matrix and to characterize the precise asymptotic training and generalization errors for KRR in the quadratic regime when $n/d^2$ converges to a non-zero constant. The generalization errors are obtained for both deterministic and random teacher models. Our proof techniques combine moment methods, Wick's formula, orthogonal polynomials, and resolvent analysis of random matrices with correlated entries.
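To make the KRR setup in the abstract concrete, here is a minimal, self-contained sketch of the estimator whose training and generalization errors the paper characterizes. The kernel $f = \exp$, the quadratic random teacher, the noise level, the ridge parameter, and the sample sizes are illustrative assumptions, not the paper's choices.

```python
# Hypothetical sketch of the KRR pipeline analyzed in the quadratic regime.
# Kernel, teacher, noise level, and ridge parameter are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
d, lam, sigma = 25, 0.1, 0.1
n, n_test = d * d // 2, 500            # quadratic regime: n / d^2 -> 1/2

def kernel(A, B):
    """Inner-product kernel f(<a, b>/d) with f = exp (an assumed choice)."""
    return np.exp(A @ B.T / d)

X = rng.standard_normal((n, d))
X_test = rng.standard_normal((n_test, d))

beta = rng.standard_normal(d) / np.sqrt(d)     # random teacher direction
teacher = lambda Z: (Z @ beta) ** 2            # a quadratic teacher function
y = teacher(X) + sigma * rng.standard_normal(n)

# Kernel ridge regression: alpha = (K + lam * I)^{-1} y.
K = kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n), y)

train_mse = np.mean((K @ alpha - y) ** 2)
test_mse = np.mean((kernel(X_test, X) @ alpha - teacher(X_test)) ** 2)
print(f"train MSE: {train_mse:.4f}   test MSE: {test_mse:.4f}")
```

The paper's results describe the deterministic limits of exactly these two quantities as $n/d^2$ converges to a non-zero constant, for both random teachers (as above) and deterministic ones.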