Sparse Gaussian Neural Processes

📅 2025-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the dual challenges of poor interpretability in probabilistic meta-learning and the O(N³) computational cost of exact Gaussian process (GP) inference, this paper proposes an interpretable and scalable sparse GP meta-learning framework. Methodologically, the authors introduce a sparse GP, built on inducing points and variational inference, into the neural process architecture for the first time, enabling users to explicitly inject domain-specific prior knowledge (e.g., task structure or expert constraints) while reducing time complexity to O(NM²) for M inducing points. The contributions are threefold: (1) overcoming the limitation of uncontrollable implicit priors in neural processes by supporting plug-and-play human-specified priors; (2) improving the reliability and interpretability of Bayesian predictions in few-shot settings; and (3) substantially reducing computational overhead relative to exact GP-based meta-learning while preserving accuracy.
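To make the O(NM²) claim concrete, here is a minimal sketch of inducing-point GP regression in the Subset-of-Regressors style, a simpler relative of the variational scheme the summary describes (not the paper's actual method); the kernel, data, and inducing locations are illustrative assumptions:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel matrix between point sets a (n, d) and b (m, d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_predict(X, y, Z, X_star, noise=0.1):
    """Subset-of-Regressors sparse GP predictive mean with M inducing points Z.

    Cost is dominated by K_uf @ K_uf.T, i.e. O(N M^2), instead of the O(N^3)
    needed to factorize the full N x N kernel matrix of an exact GP.
    """
    K_uu = rbf(Z, Z)                            # (M, M)
    K_uf = rbf(Z, X)                            # (M, N)
    Sigma = K_uu + K_uf @ K_uf.T / noise**2     # (M, M), the O(N M^2) step
    alpha = np.linalg.solve(Sigma, K_uf @ y) / noise**2
    return rbf(X_star, Z) @ alpha               # predictive mean at X_star

# Toy 1-D regression: N = 200 observations, only M = 10 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(-3, 3, 10)[:, None]
mu = sparse_gp_predict(X, y, Z, np.array([[0.0]]))
```

The interpretability lever the paper emphasizes lives in the kernel and its hyperparameters: swapping `rbf` for, say, a periodic kernel is exactly the kind of explicit prior elicitation that implicit neural process priors do not allow.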

📝 Abstract
Despite significant recent advances in probabilistic meta-learning, it is common for practitioners to avoid using deep learning models due to a comparative lack of interpretability. Instead, many practitioners simply use non-meta-models such as Gaussian processes with interpretable priors, and conduct the tedious procedure of training their model from scratch for each task they encounter. While this is justifiable for tasks with a limited number of data points, the cubic computational cost of exact Gaussian process inference renders this prohibitive when each task has many observations. To remedy this, we introduce a family of models that meta-learn sparse Gaussian process inference. Not only does this enable rapid prediction on new tasks with sparse Gaussian processes, but since our models have clear interpretations as members of the neural process family, it also allows manual elicitation of priors in a neural process for the first time. In meta-learning regimes for which the number of observed tasks is small or for which expert domain knowledge is available, this offers a crucial advantage.
Problem

Research questions and friction points this paper is trying to address.

Exact Gaussian process inference scales cubically (O(N³)), making per-task training from scratch prohibitive for tasks with many observations
Practitioners avoid deep probabilistic meta-learning models due to their comparative lack of interpretability
Neural processes learn implicit priors that users cannot manually specify or control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learns sparse Gaussian process inference
Enables rapid prediction with sparse GPs
Allows manual elicitation of neural process priors