Tighter sparse variational Gaussian processes

📅 2025-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sparse variational Gaussian processes (SVGP) suffer from a loose evidence lower bound (ELBO) and limited modeling accuracy on large-scale data, due to the restrictive assumption that the approximate posterior conditioned on the inducing points must match the prior conditional. Method: We propose a provably tighter variational framework that relaxes this assumption, decoupling the conditional posterior variances from those of the prior, and derive a collapsed ELBO under this relaxation. An orthogonally structured inducing-point design gives a unified treatment of regression, classification, and GP latent variable models. Contribution/Results: Experiments demonstrate consistent gains over standard SVGP across diverse tasks, with improved predictive accuracy and better-calibrated uncertainty at no additional computational cost, validating that tighter ELBOs translate directly into enhanced model fidelity.
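For background (a standard result, not restated in the summary): in standard SVGP the conditional approximate posterior is clamped to the prior conditional, which for regression yields Titsias's collapsed bound. The relaxation described above replaces this fixed conditional with one whose variances at the training points are smaller. A sketch of the baseline assumption and the bound being tightened:

```latex
% Standard SVGP: conditional approximate posterior fixed to the prior conditional
q(f, u) \;=\; p(f \mid u)\, q(u)

% For regression this collapses to the Titsias (2009) bound:
\log p(\mathbf{y}) \;\ge\;
\log \mathcal{N}\!\big(\mathbf{y} \,\big|\, \mathbf{0},\; Q_{nn} + \sigma^2 I\big)
\;-\; \frac{1}{2\sigma^2} \operatorname{tr}\!\big(K_{nn} - Q_{nn}\big),
\qquad Q_{nn} = K_{nm} K_{mm}^{-1} K_{mn}
```

Since relaxing $q(f \mid u) = p(f \mid u)$ strictly enlarges the variational family, the optimal bound under the relaxed family can only be at least as tight.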

📝 Abstract
Sparse variational Gaussian process (GP) approximations based on inducing points have become the de facto standard for scaling GPs to large datasets, owing to their theoretical elegance, computational efficiency, and ease of implementation. This paper introduces a provably tighter variational approximation by relaxing the standard assumption that the conditional approximate posterior given the inducing points must match that in the prior. The key innovation is to modify the conditional posterior to have smaller variances than those of the prior at the training points. We derive the collapsed bound for the regression case, describe how to use the proposed approximation in large data settings, and discuss its application to handle orthogonally structured inducing points and GP latent variable models. Extensive experiments on regression benchmarks, classification, and latent variable models demonstrate that the proposed approximation consistently matches or outperforms standard sparse variational GPs while maintaining the same computational cost. An implementation will be made available in all popular GP packages.
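As a concrete reference point, the standard collapsed bound for sparse GP regression (the baseline this paper tightens) can be computed in a few lines. This is a minimal sketch of the classical Titsias bound, not the paper's tighter variant; the RBF kernel, hyperparameter values, and toy data below are illustrative assumptions:

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between two sets of points.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def collapsed_elbo(X, y, Z, noise=0.1):
    """Titsias (2009) collapsed bound for standard sparse GP regression:
    log N(y | 0, Qnn + s^2 I) - tr(Knn - Qnn) / (2 s^2)."""
    n = X.shape[0]
    Knn_diag = np.full(n, 1.0)                      # RBF variance = 1 on the diagonal
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))         # jitter for numerical stability
    Knm = rbf(X, Z)
    Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)         # Nystrom approximation of Knn
    cov = Qnn + noise ** 2 * np.eye(n)
    _, logdet = np.linalg.slogdet(cov)
    quad = y @ np.linalg.solve(cov, y)
    log_marg = -0.5 * (logdet + quad + n * np.log(2 * np.pi))
    trace_term = (Knn_diag - np.diag(Qnn)).sum() / (2 * noise ** 2)
    return log_marg - trace_term

# Toy 1-D regression problem with 10 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
Z = np.linspace(-3, 3, 10)[:, None]
print(collapsed_elbo(X, y, Z))
```

The bound is always below the exact log marginal likelihood; the paper's modification (shrinking the conditional posterior variances at the training points) closes part of this gap at the same O(nm²) cost.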
Problem

Research questions and friction points this paper is trying to address.

Tighten the loose ELBO of sparse variational Gaussian processes
Scale GPs to large datasets without extra computational cost
Relax the prior-matching constraint on conditional posterior variances
Innovation

Methods, ideas, or system contributions that make the work stand out.

Provably tighter variational GP approximation
Conditional posterior variances decoupled from the prior
Collapsed regression bound at unchanged computational cost
T. Bui
School of Computing, Australian National University
Matthew Ashman
PhD Student, University of Cambridge
machine learning, Bayesian inference
Richard E. Turner
Department of Engineering, University of Cambridge