AI Summary
In cluttered environments, analytical computation of the ELBO gradient for variational inference in Bayesian networks is intractable. To address this, we propose an analytical approximation method leveraging reparameterization and a local compact-support likelihood assumption, deriving for the first time a closed-form expression for the ELBO gradient under Gaussian mixture noise and thereby eliminating Monte Carlo sampling entirely. This analytical gradient is integrated into the EM framework, yielding a novel variational learning paradigm that combines theoretical guarantees with computational efficiency. Experiments demonstrate that our method achieves accuracy comparable to the Laplace approximation and expectation propagation, converges faster than mean-field variational inference (MFVI), and incurs only linear gradient-computation complexity, O(N), thereby significantly enhancing scalability and efficiency for large-scale clutter inference.
Abstract
We propose an analytical solution for approximating the gradient of the Evidence Lower Bound (ELBO) in variational inference problems where the statistical model is a Bayesian network whose observations are drawn from a mixture of a Gaussian distribution and unrelated clutter, known as the clutter problem. The method employs the reparameterization trick to move the gradient operator inside the expectation and relies on the observation that, because the likelihood factorizes over the observed data, the variational distribution is generally more compactly supported than the Gaussian distribution in each likelihood factor. This permits an efficient local approximation of the individual likelihood factors, which yields an analytical solution for the integral defining the gradient expectation. We integrate the proposed gradient approximation as the expectation step of an EM (Expectation Maximization) algorithm for maximizing the ELBO and test it against classical deterministic approaches to Bayesian inference, namely the Laplace approximation, Expectation Propagation, and Mean-Field Variational Inference. The proposed method demonstrates good accuracy and rate of convergence, together with linear computational complexity.
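To make the setup concrete, the sketch below sets up a 1-D instance of the clutter problem and estimates the ELBO's likelihood-gradient term with the reparameterization trick via Monte Carlo. This is the baseline the paper's analytical approximation replaces, not the paper's closed-form method itself; all parameter values (clutter weight, clutter variance, variational moments) are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clutter model (1-D): each observation comes from the signal
# N(theta, 1) with probability 1 - w, or from broad clutter N(0, a) with
# probability w. theta is the latent parameter to be inferred.
w, a = 0.5, 10.0
theta_true = 2.0
N = 50
is_clutter = rng.random(N) < w
x = np.where(is_clutter,
             rng.normal(0.0, np.sqrt(a), N),   # clutter draws
             rng.normal(theta_true, 1.0, N))   # signal draws

def log_lik_grad(theta, x):
    """d/dtheta of sum_n log[(1-w) N(x_n | theta, 1) + w N(x_n | 0, a)]."""
    sig = (1 - w) * np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2 * np.pi)
    clut = w * np.exp(-0.5 * x ** 2 / a) / np.sqrt(2 * np.pi * a)
    r = sig / (sig + clut)           # responsibility of the signal component
    return np.sum(r * (x - theta))   # responsibility-weighted Gaussian score

# Variational posterior q(theta) = N(m, s^2). Reparameterize theta = m + s*eps
# with eps ~ N(0, 1), so the gradient moves inside the expectation:
#   d/dm E_q[log p(x | theta)] = E_eps[ d/dtheta log p(x | m + s*eps) ].
m, s = 0.0, 0.5
eps = rng.normal(size=5000)
grad_mc = np.mean([log_lik_grad(m + s * e, x) for e in eps])
print(grad_mc)
```

With the variational mean m initialized below the true signal location, the estimated gradient is positive, pushing m toward the signal. The factorization of `log_lik_grad` over the N observations is what gives the O(N) per-gradient cost; the paper's contribution is computing the eps-expectation analytically instead of by sampling.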