AI Summary
This work investigates the generalization performance of random feature methods under operator-valued kernels, with particular emphasis on the misspecified setting, where the target function lies outside the associated reproducing kernel Hilbert space (RKHS). To this end, the authors develop a unified spectral regularization framework and, through the neural tangent kernel (NTK) perspective, use it to analyze both neural operators and neural networks. They extend random feature methods to operator-valued kernels for the first time and establish minimax optimal convergence rates in both the well-specified and the misspecified regime. Key contributions include deriving optimal learning rates, quantifying the number of neurons required to achieve a prescribed accuracy, and strengthening the theoretical foundations of operator-valued kernel methods.
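To make the framework concrete, the display below sketches a spectrally regularized random feature estimator in standard spectral-filter notation, written for the scalar-valued case for simplicity; the symbols (the feature map φ, the empirical covariance, and the filter g_λ) are our illustrative assumptions, not notation taken from the paper.

```latex
% Schematic spectral-filter estimator built on M random features
% (assumed notation, scalar-valued case): \phi : X -> R^M is the random
% feature map, \widehat{\Sigma}_M its empirical covariance, and g_\lambda
% an admissible filter function applied to the spectrum of \widehat{\Sigma}_M.
\[
  \widehat{\Sigma}_M = \frac{1}{n} \sum_{i=1}^{n} \phi(x_i) \otimes \phi(x_i),
  \qquad
  \widehat{f}_\lambda = g_\lambda\big(\widehat{\Sigma}_M\big)\,
  \frac{1}{n} \sum_{i=1}^{n} y_i\, \phi(x_i).
\]
\[
  \text{Tikhonov: } g_\lambda(\sigma) = \frac{1}{\sigma + \lambda},
  \qquad
  \text{truncated SVD: } g_\lambda(\sigma) = \sigma^{-1}\,\mathbf{1}\{\sigma \ge \lambda\}.
\]
```

Tikhonov regularization (ridge regression) is the special case g_λ(σ) = 1/(σ + λ); other admissible filters, such as gradient flow or truncated SVD, fit into the same template.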
Abstract
In this work, we investigate the generalization properties of random feature methods. Our analysis extends prior results for Tikhonov regularization to a broad class of spectral regularization techniques and further generalizes the setting to operator-valued kernels. This unified framework enables a rigorous theoretical analysis of neural operators and neural networks through the lens of the Neural Tangent Kernel (NTK). In particular, it allows us to establish optimal learning rates and to quantify how many neurons are required to achieve a given accuracy. Furthermore, we establish minimax rates in the well-specified case as well as in the misspecified case, where the target function is not contained in the reproducing kernel Hilbert space. These results sharpen and complete earlier findings for specific kernel algorithms.
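As a minimal, hypothetical illustration of the scalar-valued special case (not code from the paper): random Fourier features approximating a Gaussian kernel, fitted with a generic spectral filter applied to the empirical feature covariance. The helper names and parameter choices below are our own.

```python
# Illustrative sketch: random Fourier features + spectral regularization.
# Tikhonov corresponds to the filter g_lambda(s) = 1 / (s + lambda);
# truncated SVD to g_lambda(s) = 1/s for s >= lambda and 0 otherwise.
import numpy as np

def random_fourier_features(X, n_features, bandwidth, rng):
    """Map inputs to random Fourier features approximating a Gaussian kernel."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def spectral_estimator(Phi, y, lam, filter_fn):
    """Coefficients theta = g_lambda(Sigma_hat) @ (Phi^T y / n)."""
    n = Phi.shape[0]
    cov = Phi.T @ Phi / n                    # empirical feature covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # spectral decomposition
    filtered = filter_fn(eigvals, lam)       # filter applied to the spectrum
    return eigvecs @ (filtered * (eigvecs.T @ (Phi.T @ y / n)))

tikhonov = lambda s, lam: 1.0 / (s + lam)
tsvd = lambda s, lam: np.where(s >= lam, 1.0 / np.maximum(s, lam), 0.0)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=200)
Phi = random_fourier_features(X, n_features=300, bandwidth=0.5, rng=rng)
theta = spectral_estimator(Phi, y, lam=1e-3, filter_fn=tikhonov)
print("train MSE:", np.mean((Phi @ theta - y) ** 2))
```

Swapping `tikhonov` for `tsvd` (or any other admissible filter function) changes the regularization scheme without touching the rest of the pipeline, which is the practical upshot of a unified spectral framework.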