A Spiking Neural Network Implementation of Gaussian Belief Propagation

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of implementing biologically plausible distributed Bayesian inference in spiking neural networks (SNNs). It proposes the first unified framework, built entirely from leaky integrate-and-fire (LIF) neurons, that neurally realizes the three canonical linear message operations required for Gaussian belief propagation on factor graphs: equality, addition, and multiplication. By mapping Gaussian message passing onto spike-based encoding, propagation, and decoding, the model performs biologically interpretable approximate inference. Its message updates match the accuracy of the standard sum-product algorithm, and it is demonstrated empirically on both Kalman filtering (dynamic inference) and Bayesian linear regression (static inference). These results support its applicability across diverse probabilistic inference tasks and narrow a theoretical gap between probabilistic reasoning and spiking neuromorphic computation.
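The three message operations named above have closed-form Gaussian update rules. As a rough illustration (scalar case, in conventional sum-product form; function and variable names are ours, not the paper's, and this is the abstract math, not the paper's spiking realization):

```python
def equality(msgs):
    """Equality (branching) node: product of Gaussian messages.
    Precisions add; precision-weighted means add."""
    lam = sum(1.0 / var for _, var in msgs)   # combined precision
    eta = sum(mu / var for mu, var in msgs)   # combined precision-weighted mean
    return eta / lam, 1.0 / lam               # (mean, variance)

def addition(m1, m2):
    """Addition node z = x + y: means add, variances add."""
    (mu1, v1), (mu2, v2) = m1, m2
    return mu1 + mu2, v1 + v2

def multiplication(m, a):
    """Multiplication node y = a * x for a constant gain a."""
    mu, v = m
    return a * mu, a * a * v
```

For example, fusing two unit-variance messages with means 0 and 2 through the equality node yields mean 1 and variance 0.5, i.e. the usual precision-weighted average.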

📝 Abstract
Bayesian inference offers a principled account of information processing in natural agents. However, it remains an open question how neural mechanisms implement its abstract operations. We investigate the hypothesis that a distributed form of Bayesian inference, namely message passing on factor graphs, is performed by a simulated network of leaky integrate-and-fire neurons. Specifically, we perform Gaussian belief propagation by encoding messages arriving at factor nodes as spike-based signals, propagating these signals through a spiking neural network (SNN), and decoding the spike-based signals back into outgoing messages. Three core linear operations, equality (branching), addition, and multiplication, are realized in networks of leaky integrate-and-fire models. Validation against the standard sum-product algorithm shows accurate message updates, while applications to Kalman filtering and Bayesian linear regression demonstrate the framework's potential for both dynamic and static inference tasks. Our results provide a step toward biologically grounded, neuromorphic implementations of probabilistic reasoning.
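The encode-propagate-decode loop rests on the leaky integrate-and-fire dynamics mentioned in the abstract: a neuron integrates its input with a leak, fires when it crosses threshold, and resets, so its firing rate monotonically encodes the input magnitude. A minimal sketch (Euler integration; all parameter values and the constant-current drive are illustrative assumptions, not the paper's configuration):

```python
def lif_spike_train(current, T=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Simulate one LIF neuron driven by a constant input current.
    Returns the list of spike times over a window of length T seconds."""
    v, spikes = v_reset, []
    for step in range(int(T / dt)):
        v += (dt / tau) * (-v + current)  # leaky integration toward the input
        if v >= v_th:                     # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset                   # reset membrane potential
    return spikes
```

Decoding a value from such a train can then be as simple as `len(spikes) / T`: a stronger input yields a higher rate, while a sub-threshold input (here, `current < v_th`) yields no spikes at all.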
Problem

Research questions and friction points this paper is trying to address.

Implement Bayesian inference using spiking neural networks
Realize Gaussian belief propagation with leaky integrate-and-fire neurons
Enable probabilistic reasoning for static and dynamic tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spiking neural network implements Gaussian belief propagation
Leaky integrate-and-fire neurons perform core linear operations
Message passing on factor graphs via spike-based signals
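As a concrete target for the message-passing machinery above, the Kalman filtering benchmark reduces, in the scalar case, to two Gaussian message updates per time step: a prediction (process noise widens the variance) and a correction (precision-weighted fusion with the measurement). A hedged sketch of the reference math the SNN would approximate (names and the scalar random-walk model are our assumptions):

```python
def kalman_step(prior_mu, prior_var, y, q, r):
    """One scalar Kalman filter step as Gaussian message updates.
    Model: x_t = x_{t-1} + w, w ~ N(0, q); observation y ~ N(x_t, r)."""
    # prediction: the process-noise variance q adds to the prior variance
    pred_mu, pred_var = prior_mu, prior_var + q
    # correction: precision-weighted fusion of prediction and measurement
    lam = 1.0 / pred_var + 1.0 / r            # posterior precision
    mu = (pred_mu / pred_var + y / r) / lam   # posterior mean
    return mu, 1.0 / lam
```

With prior N(0, 1), measurement y = 1, no process noise, and unit measurement noise, this returns mean 0.5 and variance 0.5, matching the textbook Kalman gain of 0.5 for that case.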