Bridge the Inference Gaps of Neural Processes via Expectation Maximization

📅 2025-01-04
🏛️ International Conference on Learning Representations
📈 Citations: 8
Influential: 0
📄 PDF
🤖 AI Summary
Neural Processes (NPs) suffer from underfitting in modeling function distributions due to suboptimal inference, limiting their performance in regression and image completion tasks. This work identifies a systematic bias in standard NPs when maximizing the log-likelihood of the meta-dataset and proposes a surrogate objective grounded in the Expectation-Maximization (EM) framework. We further introduce Self-Normalized Importance-weighted Neural Processes (SI-NP), the first NP variant with a theoretical guarantee of improved log-likelihood estimation. SI-NP integrates importance-weighted variational inference with an attention-enhanced NP architecture. Extensive experiments demonstrate significant improvements over existing NP methods across multiple function regression and image completion benchmarks, achieving state-of-the-art performance. The implementation is publicly available.

Technology Category

Application Category

📝 Abstract
The neural process (NP) is a family of computationally efficient models for learning distributions over functions. However, it suffers from under-fitting and shows suboptimal performance in practice. Researchers have primarily focused on incorporating diverse structural inductive biases, extit{e.g.} attention or convolution, in modeling. The topic of inference suboptimality and an analysis of the NP from the optimization objective perspective has hardly been studied in earlier work. To fix this issue, we propose a surrogate objective of the target log-likelihood of the meta dataset within the expectation maximization framework. The resulting model, referred to as the Self-normalized Importance weighted Neural Process (SI-NP), can learn a more accurate functional prior and has an improvement guarantee concerning the target log-likelihood. Experimental results show the competitive performance of SI-NP over other NPs objectives and illustrate that structural inductive biases, such as attention modules, can also augment our method to achieve SOTA performance. Our code is available at url{https://github.com/hhq123gogogo/SI_NPs}.
Problem

Research questions and friction points this paper is trying to address.

Neural Processes
Inference Optimization
Function Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Normalizing Importance Weighting
Neural Processes
Meta-learning
Q. Wang
Q. Wang
Department of Radiology, Southwest Hospital, Army Medical University
Medical Image AnalysisDeep Learning
M
M. Federici
AMLab, University of Amsterdam, 1098XH, Amsterdam, the Netherlands
H
H. V. Hoof
AMLab, University of Amsterdam, 1098XH, Amsterdam, the Netherlands