🤖 AI Summary
This study addresses the lack of theoretical guarantees for belief propagation (BP) in sparse, loopy factor graphs under non-Gaussian settings. By leveraging the central limit theorem, the authors analyze the statistical properties of BP message passing and prove that, under four reasonable assumptions, the marginal beliefs over variables converge to a Gaussian distribution as iterations proceed. This work provides the first theoretical convergence guarantee for Gaussian belief propagation (GBP) in non-Gaussian, sparse graphical models, uncovering an intrinsic “Gaussianization” mechanism inherent to BP. Experimental validation on stereo vision depth estimation demonstrates that variable beliefs become markedly Gaussian after only a few iterations, thereby substantiating the empirical success and broad applicability of GBP in spatial AI and related domains.
📄 Abstract
Belief Propagation (BP) is a powerful algorithm for distributed inference in probabilistic graphical models, but it quickly becomes infeasible under practical compute and memory budgets. Many efficient approximate forms of BP have been developed; the most popular is Gaussian Belief Propagation (GBP), a variant that assumes all distributions are locally Gaussian. GBP is widely used due to its efficiency and empirically strong performance in applications such as computer vision and sensor networks, even when modelling non-Gaussian problems. In this paper, we seek to provide a theoretical guarantee for when Gaussian approximations are valid in highly non-Gaussian, sparsely connected factor graphs performing BP (common in spatial AI). We leverage the Central Limit Theorem (CLT) to prove mathematically that variables' beliefs under BP converge to a Gaussian distribution in complex, loopy factor graphs obeying our four key assumptions. We then confirm experimentally, on a stereo depth estimation task, that variable beliefs become increasingly Gaussian after just a few BP iterations.
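The CLT mechanism invoked above can be illustrated numerically. A belief in BP is a product of incoming messages, i.e. a sum in the log domain, and the CLT says sums of many weakly dependent terms tend toward Gaussianity. The sketch below (not the paper's experiment; an independent illustration using standard-library Python) shows how the skewness of a sum of i.i.d. exponential draws, each highly non-Gaussian on its own, shrinks as more terms are accumulated:

```python
import math
import random

def skewness(xs):
    """Sample skewness: 0 for a symmetric (e.g. Gaussian) distribution."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

def summed_samples(k, n=20000):
    """Each sample is a sum of k i.i.d. exponential draws (skewness 2 each).

    Analogy (an assumption for illustration): each draw plays the role of
    one log-message contribution; the sum plays the role of a log-belief.
    """
    return [sum(random.expovariate(1.0) for _ in range(k)) for _ in range(n)]

random.seed(0)
for k in (1, 5, 30):
    # Theoretical skewness of a sum of k exponentials is 2 / sqrt(k),
    # so it decays toward 0 (Gaussian) as k grows.
    print(k, round(skewness(summed_samples(k)), 2))
```

This only illustrates the classical CLT; the paper's contribution is showing that a comparable argument goes through for the dependent, loopy message-passing structure of BP under its four assumptions.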