Neighbour-Driven Gaussian Process Variational Autoencoders for Scalable Structured Latent Modelling

๐Ÿ“… 2025-05-22
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the computational intractability of exact Gaussian process (GP) inference in large-scale GP variational autoencoders (GPVAEs), this paper proposes a neighborhood-driven local approximation method: for each data point, GP inference is performed only over its *k*-nearest neighbors in the latent spaceโ€”bypassing global kernel assumptions and eliminating the need for large sets of inducing points. This is the first work to incorporate neighborhood locality into the GPVAE variational inference framework, enabling more flexible kernel design while preserving essential latent variable dependencies. Experiments demonstrate substantial improvements over existing GPVAE baselines across representation learning, missing data imputation, and conditional generation tasks: prediction accuracy increases significantly, training speed accelerates by 3โ€“5ร—, and memory consumption decreases by approximately 40%.

Technology Category

Application Category

๐Ÿ“ Abstract
Gaussian Process (GP) Variational Autoencoders (VAEs) extend standard VAEs by replacing the fully factorised Gaussian prior with a GP prior, thereby capturing richer correlations among latent variables. However, performing exact GP inference in large-scale GPVAEs is computationally prohibitive, often forcing existing approaches to rely on restrictive kernel assumptions or large sets of inducing points. In this work, we propose a neighbour-driven approximation strategy that exploits local adjacencies in the latent space to achieve scalable GPVAE inference. By confining computations to the nearest neighbours of each data point, our method preserves essential latent dependencies, allowing more flexible kernel choices and mitigating the need for numerous inducing points. Through extensive experiments on tasks including representation learning, data imputation, and conditional generation, we demonstrate that our approach outperforms other GPVAE variants in both predictive performance and computational efficiency.
Problem

Research questions and friction points this paper is trying to address.

Scalable inference in Gaussian Process VAEs
Reducing computational cost of GPVAE models
Preserving latent dependencies with neighbor-driven approximation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neighbour-driven approximation for scalable GPVAE inference
Local adjacencies preserve latent dependencies
Flexible kernel choices reduce inducing points need
๐Ÿ”Ž Similar Papers
No similar papers found.