AI Summary
Gaussian processes (GPs) suffer from cubic computational complexity in large-scale, high-dimensional settings due to dense covariance matrix operations. To address this, we propose a matrix-free additive GP framework leveraging the non-uniform fast Fourier transform (NFFT). Our method exploits additive kernel structures to model low-order feature interactions and employs NFFT to achieve near-linear-time matrix-vector multiplication. We further design a hyperparameter-aware adaptive preconditioner that substantially accelerates conjugate gradient solvers and hyperparameter optimization. Evaluated on multiple real-world datasets, the approach achieves O(N log N) time complexity, delivers 3–5× faster training over standard GPs, and matches or exceeds their predictive accuracy. This work establishes an efficient, scalable paradigm for large-scale uncertainty quantification.
Abstract
Gaussian processes (GPs) are crucial in machine learning for quantifying uncertainty in predictions. However, their associated covariance matrices, defined by kernel functions, are typically dense and large-scale, posing significant computational challenges. This paper introduces a matrix-free method that uses the non-equispaced fast Fourier transform (NFFT) to multiply kernel matrices and their derivatives with vectors in nearly linear time at a prescribed accuracy. To address high-dimensional problems, we propose an additive kernel approach in which each sub-kernel captures lower-order feature interactions; this keeps each sub-problem low-dimensional, enabling the efficient application of the NFFT method, and can also improve accuracy on various real-world datasets. Additionally, we implement a preconditioning strategy that accelerates hyperparameter tuning, further improving the efficiency and effectiveness of GPs.
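The structure the abstract describes can be sketched as follows: an additive kernel built from one-dimensional sub-kernels, exposed only through a matrix-free matrix-vector product and solved with conjugate gradients. This is a minimal illustration, not the paper's implementation: the NFFT-based fast summation is replaced here by an explicit dense matvec, the RBF sub-kernels, lengthscale, and noise level are illustrative choices, and no preconditioner is applied.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy regression data (stand-in for a real dataset).
rng = np.random.default_rng(0)
N, d = 400, 6
X = rng.standard_normal((N, d))
y = np.sin(X[:, 0]) + 0.5 * np.cos(X[:, 1]) + 0.1 * rng.standard_normal(N)

def rbf_1d(x, lengthscale=1.0):
    # Squared-exponential kernel on a single feature column.
    diff = x[:, None] - x[None, :]
    return np.exp(-0.5 * (diff / lengthscale) ** 2)

def additive_matvec(v, noise=1e-2):
    # (K + noise * I) v, where K is a sum of 1D sub-kernels, one per
    # feature. Each sub-kernel acts on a low-dimensional slice of the
    # data, which is what makes an NFFT-accelerated fast summation
    # applicable; a dense matvec stands in for it here.
    out = noise * v
    for j in range(d):
        out += rbf_1d(X[:, j]) @ v
    return out

# Matrix-free solve of (K + noise * I) alpha = y: CG only ever calls
# the matvec, so the dense covariance matrix is never assembled.
A = LinearOperator((N, N), matvec=additive_matvec)
alpha, info = cg(A, y, maxiter=500)
```

Predictions and the marginal-likelihood gradient reuse the same matvec primitive, which is why fast kernel-vector products (and a good preconditioner to cut CG iterations) dominate the overall training cost.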