Turbocharging Gaussian Process Inference with Approximate Sketch-and-Project

📅 2025-05-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Gaussian processes (GPs) are widely used in biostatistics, scientific machine learning, and Bayesian optimization; however, exact posterior inference requires solving an $n \times n$ linear system, whose $O(n^2)$ memory footprint and $O(n^3)$ direct-solve cost are prohibitive for large-scale datasets. To address this, the paper proposes $\texttt{ADASAP}$, an approximate, distributed, accelerated sketch-and-project algorithm for solving these systems. Using the theory of determinantal point processes, the authors prove that the posterior mean induced by sketch-and-project converges rapidly to the true posterior mean, yielding the first efficient, condition-number-free algorithm for estimating the posterior mean along the top spectral basis functions. Experiments show that $\texttt{ADASAP}$ outperforms state-of-the-art solvers based on conjugate gradient and coordinate descent across several benchmark datasets and a large-scale Bayesian optimization task. Notably, it scales GP inference to a dataset with more than 300 million observations, a feat not previously reported in the literature.

📝 Abstract
Gaussian processes (GPs) play an essential role in biostatistics, scientific machine learning, and Bayesian optimization for their ability to provide probabilistic predictions and model uncertainty. However, GP inference struggles to scale to large datasets (which are common in modern applications), since it requires the solution of a linear system whose size scales quadratically with the number of samples in the dataset. We propose an approximate, distributed, accelerated sketch-and-project algorithm ($\texttt{ADASAP}$) for solving these linear systems, which improves scalability. We use the theory of determinantal point processes to show that the posterior mean induced by sketch-and-project rapidly converges to the true posterior mean. In particular, this yields the first efficient, condition number-free algorithm for estimating the posterior mean along the top spectral basis functions, showing that our approach is principled for GP inference. $\texttt{ADASAP}$ outperforms state-of-the-art solvers based on conjugate gradient and coordinate descent across several benchmark datasets and a large-scale Bayesian optimization task. Moreover, $\texttt{ADASAP}$ scales to a dataset with $>3 \cdot 10^8$ samples, a feat which has not been accomplished in the literature.
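To make the core idea concrete: the GP posterior mean requires solving $(K + \sigma^2 I)\,\alpha = y$ for an $n \times n$ kernel matrix. The sketch below illustrates a basic sketch-and-project iteration (in the sense of Gower and Richtárik) for such a symmetric positive-definite system, where each step samples a small row subset and projects the iterate onto the sketched equations. This is a minimal, hypothetical illustration only; the paper's $\texttt{ADASAP}$ additionally incorporates approximation, distribution across workers, and acceleration, none of which are shown here.

```python
import numpy as np

def sketch_and_project(A, b, sketch_size=10, iters=2000, seed=0):
    """Minimal sketch-and-project solver for A x = b, with A symmetric
    positive definite (e.g. a GP kernel matrix K + sigma^2 * I).

    Each iteration samples a random subset S of rows and projects the
    current iterate, in the A-norm, onto the solution set of the
    sketched system A[S] x = b[S]. With this choice of norm the update
    touches only the coordinates in S (randomized block coordinate
    descent is a special case of sketch-and-project).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = np.zeros(n)
    for _ in range(iters):
        # Uniform row sketch; the paper's analysis instead uses
        # determinantal point process theory for the sampling.
        S = rng.choice(n, size=sketch_size, replace=False)
        residual = b[S] - A[S] @ x               # residual on the sketch
        block = A[np.ix_(S, S)]                  # principal submatrix A[S, S]
        x[S] += np.linalg.solve(block, residual) # A-norm projection step
    return x

# Toy GP-style system: RBF kernel on random 1-D inputs plus noise term.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 1))
K = np.exp(-0.5 * (X - X.T) ** 2) + 0.1 * np.eye(50)
y = rng.standard_normal(50)
alpha = sketch_and_project(K, y, sketch_size=10, iters=2000, seed=1)
```

Each iteration costs $O(n \cdot s + s^3)$ for sketch size $s$, rather than the $O(n^3)$ of a direct solve, which is what makes the approach attractive when $n$ is in the hundreds of millions.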
Problem

Research questions and friction points this paper is trying to address.

Scalable Gaussian Process inference for large datasets
Efficient estimation of posterior mean in GPs
Distributed accelerated algorithm for linear systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Approximate, distributed, accelerated sketch-and-project algorithm (ADASAP)
Convergence guarantees via determinantal point process theory
Scalable to over 300 million samples