Diffusion Bridge Variational Inference for Deep Gaussian Processes

📅 2025-09-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Deep Gaussian processes (DGPs) suffer from challenging posterior inference over inducing variables, and existing denoising diffusion variational inference (DDVI) employs a fixed, data-agnostic Gaussian initial distribution, leading to inefficient reverse diffusion paths and slow convergence. Method: We propose Diffusion Bridge Variational Inference (DBVI), which reconstructs the prior via a Doob h-transformed bridge process, introduces a learnable, data-dependent initial distribution, and jointly amortizes inference over both inducing variables and inputs. DBVI optimizes a Girsanov-corrected evidence lower bound (ELBO), leverages time-reversed stochastic differential equations (SDEs), and employs neural network parameterization for flexible posterior approximation. Contribution/Results: Experiments demonstrate that DBVI consistently outperforms DDVI and other variational baselines across regression, classification, and image reconstruction tasks, achieving higher predictive accuracy, faster convergence, and improved posterior calibration.
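To make the Doob h-transform idea concrete: conditioning a base diffusion to hit a target endpoint adds a correction drift proportional to the gradient of the log h-function. For Brownian motion pinned to an endpoint `y` at time `T`, this correction has the closed form `(y - x)/(T - t)`. The sketch below is a minimal illustration of that Gaussian special case (function name and step counts are ours, not the paper's), not an implementation of DBVI itself:

```python
import math
import random

random.seed(1)

def brownian_bridge_path(x0, y, T=1.0, steps=200):
    """Doob h-transform of Brownian motion, conditioned to hit y at time T.

    The h-transform adds the drift (y - x)/(T - t) to the base process;
    this is the Gaussian special case of the bridge construction that
    DBVI uses to reinterpret the prior."""
    dt = T / steps
    x, path = x0, [x0]
    for k in range(steps - 1):
        t = k * dt
        drift = (y - x) / (T - t)  # Doob correction: pull toward the endpoint
        x += drift * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(x)
    path.append(y)  # the bridge is pinned at the endpoint by construction
    return path

path = brownian_bridge_path(0.0, 1.0)
```

The pull strengthens as `t` approaches `T`, which is what guarantees the path arrives at the conditioned endpoint.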

๐Ÿ“ Abstract
Deep Gaussian processes (DGPs) enable expressive hierarchical Bayesian modeling but pose substantial challenges for posterior inference, especially over inducing variables. Denoising diffusion variational inference (DDVI) addresses this by modeling the posterior as a time-reversed diffusion from a simple Gaussian prior. However, DDVI's fixed unconditional starting distribution remains far from the complex true posterior, resulting in inefficient inference trajectories and slow convergence. In this work, we propose Diffusion Bridge Variational Inference (DBVI), a principled extension of DDVI that initiates the reverse diffusion from a learnable, data-dependent initial distribution. This initialization is parameterized via an amortized neural network and progressively adapted using gradients from the ELBO objective, reducing the posterior gap and improving sample efficiency. To enable scalable amortization, we design the network to operate on the inducing inputs, which serve as structured, low-dimensional summaries of the dataset and naturally align with the inducing variables' shape. DBVI retains the mathematical elegance of DDVI, including Girsanov-based ELBOs and reverse-time SDEs,while reinterpreting the prior via a Doob-bridged diffusion process. We derive a tractable training objective under this formulation and implement DBVI for scalable inference in large-scale DGPs. Across regression, classification, and image reconstruction tasks, DBVI consistently outperforms DDVI and other variational baselines in predictive accuracy, convergence speed, and posterior quality.
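The core mechanical difference from DDVI described above is where the reverse-time integration starts: from a learned, data-dependent Gaussian rather than a fixed N(0, I). A toy Euler-Maruyama sketch of that pattern follows; the drift is a stand-in for the learned score network, and `amortized_init` is a hypothetical placeholder for the paper's amortized initializer, so this is illustrative only:

```python
import math
import random

random.seed(0)

def amortized_init(inducing_inputs):
    # Placeholder for the amortized initializer: maps inducing inputs to
    # the mean and scale of a data-dependent Gaussian starting distribution.
    m = sum(inducing_inputs) / len(inducing_inputs)
    return m, 0.5  # fixed scale for the sketch; learnable in practice

def reverse_sde_sample(inducing_inputs, drift, sigma=1.0, steps=100, T=1.0):
    """Euler-Maruyama integration of a reverse-time SDE, started from a
    learnable, data-dependent initial distribution (DBVI's key change
    versus DDVI's fixed unconditional Gaussian start)."""
    m, s = amortized_init(inducing_inputs)
    x = random.gauss(m, s)          # data-dependent start
    dt = T / steps
    for k in range(steps):
        t = T - k * dt              # integrate backwards in time
        x += drift(x, t) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    return x

# Toy mean-reverting drift standing in for the learned score network.
sample = reverse_sde_sample([0.2, -0.1, 0.4], drift=lambda x, t: -x)
```

Because the start is already near the posterior, fewer reverse steps are wasted transporting mass from an uninformative prior, which is the intuition behind the reported faster convergence.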
Problem

Research questions and friction points this paper is trying to address.

Improving posterior inference efficiency for deep Gaussian processes
Addressing slow convergence in diffusion variational inference methods
Reducing the gap between initialization and complex true posterior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable data-dependent initial distribution
Amortized neural network parameterization via inducing inputs
Doob-bridged diffusion process with tractable ELBO
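The second bullet, amortization over inducing inputs, can be sketched as a small shared map applied to each inducing input so the output shape matches the inducing variables. All names and the affine-plus-tanh form below are our illustrative choices, not the paper's architecture:

```python
import math

def amortized_init_params(inducing_inputs, w, b):
    """Hypothetical amortized initializer: a shared affine map applied to
    each D-dimensional inducing input z_m, producing per-inducing-variable
    Gaussian parameters (mean, log_scale). The output has one entry per
    inducing variable, mirroring the shape alignment noted in the abstract."""
    params = []
    for z in inducing_inputs:
        pre = sum(wi * zi for wi, zi in zip(w, z)) + b
        mean = math.tanh(pre)    # bounded mean for numerical stability
        log_scale = -1.0         # fixed here; a learnable output in practice
        params.append((mean, log_scale))
    return params

Z = [[0.1, 0.3], [-0.2, 0.5], [0.4, -0.1]]   # M=3 inducing inputs, D=2
params = amortized_init_params(Z, w=[0.7, -0.3], b=0.05)
```

Operating on the M inducing inputs rather than the full dataset keeps the amortization cost independent of dataset size, which is what makes the scheme scalable.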