Predicting kernel regression learning curves from only raw data statistics

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of predicting kernel regression learning curves solely from raw data statistics. The authors propose an analytical framework grounded in the Hermite eigenstructure ansatz (HEA), which predicts test risk as a function of sample size using only the empirical covariance matrix and an empirical polynomial decomposition of the target function. The HEA provides analytic approximations of kernel eigenvalues and eigenfunctions for anisotropic real-world image data, and reveals a shared structure between kernel learning and MLP feature learning in the Hermite polynomial basis. By combining covariance analysis, target-function decomposition, and kernel learning theory, the framework achieves end-to-end learning curve prediction, validated with high accuracy on CIFAR-5m, SVHN, and ImageNet. The authors further confirm empirically that MLPs in the feature-learning regime learn Hermite polynomials in the hierarchical order predicted by the HEA.

📝 Abstract
We study kernel regression with common rotation-invariant kernels on real datasets including CIFAR-5m, SVHN, and ImageNet. We give a theoretical framework that predicts learning curves (test risk vs. sample size) from only two measurements: the empirical data covariance matrix and an empirical polynomial decomposition of the target function $f_*$. The key new idea is an analytical approximation of a kernel's eigenvalues and eigenfunctions with respect to an anisotropic data distribution. The eigenfunctions resemble Hermite polynomials of the data, so we call this approximation the Hermite eigenstructure ansatz (HEA). We prove the HEA for Gaussian data, but we find that real image data is often "Gaussian enough" for the HEA to hold well in practice, enabling us to predict learning curves by applying prior results relating kernel eigenstructure to test risk. Extending beyond kernel regression, we empirically find that MLPs in the feature-learning regime learn Hermite polynomials in the order predicted by the HEA. Our HEA framework is a proof of concept that an end-to-end theory of learning which maps dataset structure all the way to model performance is possible for nontrivial learning algorithms on real datasets.
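The "empirical polynomial decomposition of the target function" mentioned in the abstract can be illustrated with a minimal sketch (not the paper's actual code): estimating the probabilists' Hermite coefficients of a toy target by Monte Carlo over Gaussian inputs. The choice of target `f_star`, the sample count, and the truncation order `K` are illustrative assumptions; the only facts used are that the polynomials $\mathrm{He}_k$ are orthogonal under $\mathcal{N}(0,1)$ with $\mathbb{E}[\mathrm{He}_k^2] = k!$.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)  # draws from N(0, 1)

def f_star(x):
    # hypothetical toy target: x^2 = He_0(x) + He_2(x), so c_0 = c_2 = 1
    return x ** 2

K = 5  # highest Hermite order to estimate (illustrative truncation)
coeffs = []
for k in range(K + 1):
    basis = np.zeros(K + 1)
    basis[k] = 1.0
    He_k = hermeval(x, basis)  # probabilists' Hermite polynomial He_k(x)
    # orthogonality under N(0,1): E[He_j * He_k] = k! * delta_jk,
    # so the k-th coefficient is E[f_*(x) He_k(x)] / k!
    coeffs.append(np.mean(f_star(x) * He_k) / math.factorial(k))
```

For the toy target $x^2$, the estimated coefficients concentrate on orders 0 and 2, matching the exact expansion $x^2 = \mathrm{He}_0(x) + \mathrm{He}_2(x)$.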
Problem

Research questions and friction points this paper is trying to address.

Predicting kernel regression learning curves from raw data statistics
Developing analytical approximation for kernel eigenvalues and eigenfunctions
Extending framework to understand MLP learning behavior patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predicts learning curves using data covariance
Approximates kernel eigenvalues via Hermite eigenstructure
Extends framework to MLPs in feature-learning regime
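To make concrete what a "learning curve" is in this setting, here is a minimal numpy sketch that measures an empirical learning curve for kernel ridge regression on synthetic Gaussian data. This only illustrates the object the paper's theory predicts, not the HEA prediction itself; the target function, kernel bandwidth, ridge parameter, and sample sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3  # input dimension (illustrative)

def f_star(X):
    # hypothetical toy target: the first input coordinate
    return X[:, 0]

def rbf_kernel(A, B):
    # rotation-invariant RBF kernel with bandwidth ~ sqrt(d)
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * d))

X_test = rng.standard_normal((500, d))
y_test = f_star(X_test)

risks = []
for n in [10, 40, 160]:
    X_train = rng.standard_normal((n, d))
    y_train = f_star(X_train)
    K = rbf_kernel(X_train, X_train)
    # kernel ridge regression: alpha = (K + lambda I)^{-1} y
    alpha = np.linalg.solve(K + 1e-3 * np.eye(n), y_train)
    y_hat = rbf_kernel(X_test, X_train) @ alpha
    risks.append(np.mean((y_test - y_hat) ** 2))
# risks traces out the empirical learning curve: test risk vs. sample size
```

The paper's contribution is to predict curves like `risks` analytically, from the data covariance and the Hermite decomposition of $f_*$, rather than by running the regression.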