Training of Spiking Neural Networks with Expectation-Propagation

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses three key challenges in spiking neural networks (SNNs): (1) inefficient learning of parameter marginal distributions, (2) lack of a unified framework for handling both discrete/continuous weights and deterministic/stochastic dynamics, and (3) sensitivity to nuisance parameters—e.g., hidden-layer outputs—that hinder robust training. To this end, we propose an expectation propagation (EP)-based message-passing training framework that bypasses gradient computation entirely. Leveraging Bayesian inference, our method analytically marginalizes nuisance variables, enabling the first gradient-free, batch-level unified training of both deterministic and stochastic SNNs. Compared with conventional gradient-based approaches, it substantially reduces iteration counts and accelerates convergence. Empirical evaluations demonstrate superior performance on benchmark classification and regression tasks. Our framework establishes a scalable, robust, and principled training paradigm for deep Bayesian SNNs.

📝 Abstract
In this paper, we propose a unifying message-passing framework for training spiking neural networks (SNNs) using Expectation-Propagation. Our gradient-free method learns the marginal distributions of the network parameters while simultaneously marginalizing nuisance parameters, such as the outputs of hidden layers. This framework allows, for the first time, training of discrete and continuous weights for deterministic and stochastic spiking networks using batches of training samples. Although its convergence is not guaranteed, in practice the algorithm converges faster than gradient-based methods, without requiring a large number of passes through the training data. The classification and regression results presented pave the way for new efficient training methods for deep Bayesian networks.
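To make the core mechanism concrete: Expectation-Propagation maintains a simple (here, Gaussian) approximation to a posterior by repeatedly forming a "cavity" distribution, moment-matching against one intractable factor, and folding the result back in. The sketch below is not the paper's SNN training algorithm; it is a minimal, standard scalar example, EP for a probit (step-like, spike-style) likelihood under a Gaussian prior, which illustrates the cavity/moment-matching loop the abstract refers to. All names (`ep_probit`, `phi`, `Phi`) are illustrative.

```python
import math

def phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ep_probit(ys, prior_var=1.0, iters=20):
    """EP approximation N(m, v) to the posterior over a scalar weight w with
    prior N(0, prior_var) and likelihood prod_i Phi(y_i * w), y_i in {-1, +1}.
    Each Phi factor plays the role of a hard, non-differentiable threshold."""
    n = len(ys)
    # site (factor) approximations in natural parameters: precision, precision*mean
    tau = [0.0] * n
    nu = [0.0] * n
    # global posterior natural parameters = prior + all sites
    tau_post = 1.0 / prior_var
    nu_post = 0.0
    for _ in range(iters):
        for i, y in enumerate(ys):
            # cavity: remove site i from the current posterior
            tau_cav = tau_post - tau[i]
            nu_cav = nu_post - nu[i]
            v_cav, m_cav = 1.0 / tau_cav, nu_cav / tau_cav
            # moment-match the tilted distribution Phi(y*w) * N(w; m_cav, v_cav)
            z = y * m_cav / math.sqrt(1.0 + v_cav)
            r = phi(z) / max(Phi(z), 1e-12)
            m_new = m_cav + y * v_cav * r / math.sqrt(1.0 + v_cav)
            v_new = v_cav - v_cav ** 2 * r * (z + r) / (1.0 + v_cav)
            # recover the updated site by dividing the cavity back out
            tau[i] = 1.0 / v_new - tau_cav
            nu[i] = m_new / v_new - nu_cav
            tau_post = tau_cav + tau[i]
            nu_post = nu_cav + nu[i]
    return nu_post / tau_post, 1.0 / tau_post  # posterior mean, variance
```

Note that no gradient of the threshold factor is ever taken: each update only needs the moments of the tilted distribution, which is what lets EP-style schemes handle discrete or non-differentiable components such as spikes.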
Problem

Research questions and friction points this paper is trying to address.

Training spiking neural networks without gradients
Learning marginal distributions of network parameters
Handling discrete and continuous weights simultaneously
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expectation-Propagation for SNN training
Gradient-free parameter marginalization
Handles discrete and continuous weights