Random feature approximation for general spectral methods

📅 2023-08-29
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper studies the generalization performance of random feature methods under general spectral regularization, covering explicit schemes (e.g., Tikhonov regularization) and implicit schemes (e.g., gradient descent and accelerated variants). Drawing on connections to neural tangent kernel (NTK) theory, it establishes optimal learning rates under source conditions that reach beyond RKHS-type regularity, unifying the analysis of implicit and explicit regularization mechanisms. Methodologically, the work combines random feature mappings, spectral filtering, source-condition modeling, and generalization error bounds to obtain tight convergence rates relevant to neural networks and neural operators. Key contributions: (1) optimal convergence guarantees extended beyond the RKHS framework to broader classes of spectral regularity; (2) a unified, tight, and improved theoretical bound applicable to a wide family of kernel-based algorithms; and (3) theoretical support for the computational savings that random features bring to large-scale kernel methods.
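The core speed-up the summary refers to replaces an n × n kernel matrix with an n × D feature matrix. A minimal NumPy sketch of the classic random Fourier feature construction (Rahimi–Recht, for the Gaussian kernel; function names and the choice of `gamma` and `D` are ours, not the paper's):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Exact Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def random_fourier_features(X, D, gamma=0.5, rng=None):
    # phi(x) = sqrt(2/D) * cos(W x + b), with W ~ N(0, 2*gamma*I),
    # b ~ Uniform[0, 2*pi), so that E[phi(x) . phi(y)] = k(x, y).
    rng = np.random.default_rng(0) if rng is None else rng
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
K = rbf_kernel(X, X)                          # exact 50 x 50 kernel matrix
Phi = random_fourier_features(X, D=5000, rng=rng)
K_hat = Phi @ Phi.T                           # Monte Carlo approximation
print(np.abs(K - K_hat).max())                # shrinks as D grows
```

Any kernel method can then be run on `Phi` directly, reducing per-iteration cost from O(n^2) to O(nD).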

📝 Abstract
Random feature approximation is arguably one of the most popular techniques to speed up kernel methods in large scale algorithms and provides a theoretical approach to the analysis of deep neural networks. We analyze generalization properties for a large class of spectral regularization methods combined with random features, containing kernel methods with implicit regularization such as gradient descent or explicit methods like Tikhonov regularization. For our estimators we obtain optimal learning rates over regularity classes (even for classes that are not included in the reproducing kernel Hilbert space), which are defined through appropriate source conditions. This improves or completes previous results obtained in related settings for specific kernel algorithms.
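The abstract's point that gradient descent acts as an implicit spectral regularizer, on par with explicit Tikhonov regularization, can be illustrated with a small NumPy sketch (a toy example under our own assumptions, not the paper's estimator): both methods apply a spectral filter g(σ) to the feature covariance, 1/(σ + λ) for Tikhonov versus (1 − (1 − ησ)^t)/σ for t steps of gradient descent, so for small λ and large t their solutions nearly coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
n, D = 200, 50
Phi = rng.normal(size=(n, D))          # stand-in random-feature matrix
w_true = rng.normal(size=D)
y = Phi @ w_true + 0.1 * rng.normal(size=n)

A = Phi.T @ Phi / n                    # empirical feature covariance
b = Phi.T @ y / n

# Explicit spectral filter: Tikhonov (ridge), g(s) = 1 / (s + lam)
lam = 5e-3
w_ridge = np.linalg.solve(A + lam * np.eye(D), b)

# Implicit spectral filter: t gradient-descent steps on the least-squares
# risk, g(s) = (1 - (1 - eta*s)^t) / s; early stopping plays the role of lam.
eta, t = 0.3, 500
w_gd = np.zeros(D)
for _ in range(t):
    w_gd -= eta * (A @ w_gd - b)

print(np.linalg.norm(w_ridge - w_gd))  # the two regularized solutions agree
```

The step size `eta` must stay below 2 divided by the largest eigenvalue of `A` for the iteration to converge; the values above are chosen for this toy problem.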
Problem

Research questions and friction points this paper is trying to address.

Existing random feature analyses cover specific algorithms rather than general spectral regularization
The NTK connection between random features and neural networks lacks a unified theoretical treatment
Optimal learning rates were unknown for regularity classes that fall outside the RKHS
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines random feature approximation with a general class of spectral regularization methods
Treats implicit regularization (gradient descent, accelerated methods) and explicit Tikhonov regularization in one framework
Proves optimal learning rates over source-condition regularity classes, including classes not contained in the RKHS