AI Summary
This work addresses a key challenge in kernel gradient descent algorithms: parameter selection typically relies on cross-validation and lacks theoretical guarantees. To overcome this limitation, the authors propose an adaptive parameter selection strategy grounded in bias-variance decomposition and data splitting. By introducing an empirical effective dimension to quantify iteration increments, the method automatically adapts to diverse kernel functions, target functions, and error metrics. Within the framework of integral operators and statistical learning theory, the approach is shown to achieve the optimal generalization error bound. The proposed strategy is both practically implementable and attains the minimax optimal convergence rate, offering a significant improvement over existing parameter selection methods in theoretical rigor and practical performance.
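The empirical effective dimension is the central quantity in the strategy. As a point of reference, below is a minimal sketch of the standard empirical effective dimension from learning theory, computed from the kernel matrix; the function name `empirical_effective_dimension`, the regularization parameter `lam`, and the exact trace formula are illustrative assumptions, since the paper's precise variant for quantifying KGD iteration increments is not spelled out in this summary.

```python
import numpy as np

def empirical_effective_dimension(K, lam):
    """Standard empirical effective dimension from learning theory:
        N_hat(lam) = trace((K/n + lam * I)^{-1} (K/n)),
    where K is the n x n kernel matrix on the sample.
    The paper's variant for KGD iteration increments may differ.
    """
    n = K.shape[0]
    Kn = K / n
    return np.trace(np.linalg.solve(Kn + lam * np.eye(n), Kn))

# Example: effective dimension of a Gaussian kernel matrix on random data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists)
print(empirical_effective_dimension(K, lam=0.1))
```

Intuitively, N_hat(lam) counts how many kernel eigendirections the data can resolve at regularization level lam, which is why it is a natural yardstick for how much each gradient descent iteration still contributes.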
Abstract
This paper proposes a novel parameter selection strategy for kernel-based gradient descent (KGD) algorithms, integrating bias-variance analysis with a data-splitting method. We introduce the concept of the empirical effective dimension to quantify iteration increments in KGD and derive an adaptive, implementable parameter selection strategy. Theoretical verification is provided within the framework of learning theory. Utilizing the recently developed integral operator approach, we rigorously demonstrate that KGD, equipped with the proposed adaptive parameter selection strategy, achieves the optimal generalization error bound and adapts effectively to different kernels, target functions, and error metrics. Consequently, this strategy offers significant advantages over existing parameter selection methods for KGD.
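To make the setting concrete, here is a minimal sketch of KGD for least-squares regression in which the key parameter, the number of iterations, is chosen on a held-out split. The Gaussian kernel, the squared-error validation metric, and the simple "keep the best validated iterate" rule are illustrative assumptions standing in for the paper's bias-variance-based adaptive strategy, which the abstract only outlines.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kgd_with_holdout(X_tr, y_tr, X_val, y_val, eta=1.0, max_iter=500):
    """Kernel gradient descent on the empirical squared loss; the stopping
    iteration is selected by data splitting (hold-out validation)."""
    K_tr = gaussian_kernel(X_tr, X_tr)
    K_val = gaussian_kernel(X_val, X_tr)
    n = len(y_tr)
    alpha = np.zeros(n)                     # f_t = sum_i alpha_i K(x_i, .)
    best_alpha, best_err, best_t = alpha.copy(), np.inf, 0
    for t in range(1, max_iter + 1):
        # functional gradient step: alpha <- alpha - (eta/n) (K alpha - y)
        alpha -= (eta / n) * (K_tr @ alpha - y_tr)
        err = np.mean((K_val @ alpha - y_val) ** 2)
        if err < best_err:
            best_err, best_alpha, best_t = err, alpha.copy(), t
    return best_alpha, best_t

# Example: noisy sinusoid, 150 training / 50 validation points
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
alpha, t_star = kgd_with_holdout(X[:150], y[:150], X[150:], y[150:])
print("selected stopping iteration:", t_star)
```

Early iterations reduce bias while later ones mostly fit noise, so the stopping time governs the bias-variance trade-off; the paper replaces this naive hold-out rule with an adaptive criterion built on the empirical effective dimension.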