AI Summary
This work addresses the computational cost of Gaussian process (GP) training on moderately large datasets, where exact inference scales cubically in time and quadratically in memory with the number of data points. To overcome this, the authors propose the projected likelihood (PL), a training objective constructed by projecting the data onto a low-dimensional linear subspace. They derive a closed-form expression for the information loss incurred by the projection and show that drawing the projections uniformly at random from the unit sphere keeps this loss small while preserving computational tractability. Empirical evaluations on several moderately large benchmark datasets show that PL outperforms both exact GP training and the sparse variational free energy approximation in predictive accuracy and computational efficiency.
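For intuition, one plausible instantiation of such an objective (an illustrative sketch, not the paper's exact definition) replaces the exact log marginal likelihood of a zero-mean GP with the log-density of the projected targets under the projected prior:

$$
\mathcal{L}_{\mathrm{PL}}(\theta) \;=\; \log \mathcal{N}\!\left(\Pi y \,\middle|\, 0,\; \Pi \bigl(K_\theta + \sigma^2 I\bigr) \Pi^\top\right),
$$

where $y \in \mathbb{R}^n$ are the targets, $K_\theta$ is the kernel matrix, and $\Pi \in \mathbb{R}^{m \times n}$ (with $m \ll n$) has rows drawn uniformly from the unit sphere. Evaluating this costs $\mathcal{O}(mn^2 + m^3)$ rather than the $\mathcal{O}(n^3)$ of the exact objective.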
Abstract
We propose a novel training objective for Gaussian processes (GPs) constructed from lower-dimensional linear projections of the data, referred to as the \emph{projected likelihood} (PL). We provide a closed-form expression for the information loss associated with the PL and show empirically that it can be reduced by using random projections drawn uniformly from the unit sphere. We demonstrate the superiority of the PL, in terms of both accuracy and computational efficiency, over exact GP training and the variational free energy approach to sparse GPs, across different optimisers, kernels, and datasets of moderately large size.
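Below is a minimal runnable sketch of the idea in Python/NumPy. The helper names (`random_projection`, `projected_log_likelihood`), the RBF kernel, and the exact form $\log \mathcal{N}(\Pi y \mid 0, \Pi(K+\sigma^2 I)\Pi^\top)$ are illustrative assumptions rather than the paper's definitions; only the use of projection rows drawn uniformly from the unit sphere follows the abstract.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix for inputs X of shape (n, d)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def random_projection(m, n, rng):
    """m x n matrix whose rows are uniform on the unit sphere S^{n-1}:
    standard-Gaussian vectors, normalised row-wise, are uniform on the sphere."""
    P = rng.standard_normal((m, n))
    return P / np.linalg.norm(P, axis=1, keepdims=True)

def projected_log_likelihood(y, K, sigma2, P):
    """log N(P y | 0, P (K + sigma2 I) P^T): density of the projected targets
    under the projected GP prior. Costs O(m n^2 + m^3) instead of the O(n^3)
    of the exact log marginal likelihood."""
    C = P @ (K + sigma2 * np.eye(K.shape[0])) @ P.T
    z = P @ y
    _, logdet = np.linalg.slogdet(C)
    quad = z @ np.linalg.solve(C, z)
    return -0.5 * (logdet + quad + len(z) * np.log(2.0 * np.pi))

# Toy example on synthetic data: project n = 500 points down to m = 50.
rng = np.random.default_rng(0)
n, d, m = 500, 2, 50
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
K = rbf_kernel(X)
P = random_projection(m, n, rng)
print(projected_log_likelihood(y, K, sigma2=0.01, P=P))
```

In an actual training loop one would presumably maximise this objective over the kernel hyperparameters with a gradient-based optimiser, possibly resampling or averaging over several random projections to reduce the variance introduced by the choice of $\Pi$.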