ADMM-Based Training for Spiking Neural Networks

📅 2025-05-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Spiking neural networks (SNNs) are difficult to train because their step-like activation functions are non-differentiable, and the popular surrogate-gradient workarounds suffer from numerical imprecision and poor scalability. To address this, the authors bring the alternating direction method of multipliers (ADMM) into SNN training. By reformulating the optimization problem and deriving closed-form parameter update rules, the approach handles the non-smooth activation constraint explicitly, without relying on gradient approximations. Experiments in a simulated proof-of-concept show stable convergence and highlight the optimizer's potential as a gradient-free alternative to SGD-based training for low-power neuromorphic learning, along with directions for improving the method.

📝 Abstract
In recent years, spiking neural networks (SNNs) have gained momentum due to their high potential in time-series processing combined with minimal energy consumption. However, they still lack a dedicated and efficient training algorithm. The popular backpropagation with surrogate gradients, adapted from stochastic gradient descent (SGD)-derived algorithms, has several drawbacks when used as an optimizer for SNNs. Specifically, it suffers from low scalability and numerical imprecision. In this paper, we propose a novel SNN training method based on the alternating direction method of multipliers (ADMM). Our ADMM-based training aims to solve the problem of the SNN step function's non-differentiability. We formulate the problem, derive closed-form updates, and empirically show the optimizer's convergence properties, great potential, and possible new research directions to improve the method in a simulated proof-of-concept.
Problem

Research questions and friction points this paper is trying to address.

SNNs lack a dedicated, efficient training algorithm
Surrogate-gradient backpropagation suffers from low scalability and numerical imprecision
The SNN step function is non-differentiable, blocking standard gradient-based optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

ADMM-based end-to-end training for SNNs
Handles the step function's non-differentiability without gradient approximations
Closed-form parameter updates with empirically demonstrated convergence
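The closed-form-update idea can be illustrated with a toy single-layer sketch. This is not the paper's actual formulation: the squared loss, the particular variable splitting, the penalty `rho`, the helper `admm_step_layer`, and the toy data are all assumptions made for illustration. The key point it demonstrates is that ADMM sidesteps the step function's non-differentiability, since the non-smooth constraint is handled by a pointwise, per-branch minimization rather than by a gradient:

```python
import numpy as np

def step(z):
    # Heaviside spike activation: non-differentiable at 0
    return (z >= 0).astype(float)

def admm_step_layer(X, Y, n_iter=50, rho=1.0):
    """Toy ADMM sketch: find W so that step(X @ W) approximates binary Y.

    Splitting:  minimize ||A - Y||^2   s.t.  Z = X @ W,  A = step(Z).
    The W-update is a closed-form least-squares solve; the Z-update
    handles the non-smooth step constraint by comparing its two
    branches (spike / no spike) pointwise, so no gradient of step()
    is ever needed.
    """
    n, m = X.shape[0], Y.shape[1]
    W = np.zeros((X.shape[1], m))
    Z = np.zeros((n, m))          # pre-activation auxiliary variable
    U = np.zeros((n, m))          # scaled dual for the constraint Z = X @ W
    for _ in range(n_iter):
        # W-update: closed-form least squares against the current target Z - U
        W = np.linalg.lstsq(X, Z - U, rcond=None)[0]
        XW = X @ W
        v = XW + U
        # Z-update: pointwise minimization over the two branches of step()
        z0 = np.minimum(v, -1e-6)   # best z with step(z) = 0 (no spike)
        z1 = np.maximum(v, 0.0)     # best z with step(z) = 1 (spike)
        cost0 = (0.0 - Y) ** 2 + 0.5 * rho * (z0 - v) ** 2
        cost1 = (1.0 - Y) ** 2 + 0.5 * rho * (z1 - v) ** 2
        Z = np.where(cost0 <= cost1, z0, z1)
        # dual ascent on the constraint residual
        U = U + XW - Z
    return W, step(Z)
```

On a linearly separable toy problem, e.g. `X = [[1], [-1], [2], [-2]]` with targets `Y = [[1], [0], [1], [0]]`, the iterates drive `W` to a sign-correct solution even though the loss surface of `step()` is flat almost everywhere, which is exactly where surrogate-gradient SGD needs its approximation.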
Giovanni Perin
Department of Information Engineering, University of Brescia, Brescia, Italy
Cesare Bidini
Department of Mathematics “Tullio Levi-Civita”, University of Padova, Padova, Italy
Riccardo Mazzieri
Department of Information Engineering, University of Padova, Padova, Italy
Michele Rossi
Department of Information Engineering, University of Padova, Padova, Italy
edge computing, green mobile networks, wireless sensing, machine learning, optimization