StochEP: Stochastic Equilibrium Propagation for Spiking Convergent Recurrent Neural Networks

📅 2025-11-14
🤖 AI Summary
Existing equilibrium propagation (EP) methods for spiking neural networks (SNNs) rely predominantly on deterministic neurons, limiting their ability to capture the intrinsic discontinuity of spiking dynamics and hindering scalability to complex visual tasks. This work proposes Stochastic EP—a novel framework that introduces stochasticity into EP to faithfully model spike generation. We integrate probabilistic spiking neurons with a convergent recurrent architecture, enabling scalable deep convolutional spiking recurrent networks. Theoretically, we prove that Stochastic EP converges to deterministic EP in the mean-field limit, thus preserving both biological plausibility and optimization stability. Experiments demonstrate that our method achieves performance comparable to backpropagation-through-time (BPTT)-trained SNNs and EP-trained artificial neural networks (ANNs) on visual benchmarks, while retaining fully local synaptic updates. This establishes a new paradigm for online learning on neuromorphic hardware.

📝 Abstract
Spiking Neural Networks (SNNs) promise energy-efficient, sparse, biologically inspired computation. Training them with Backpropagation Through Time (BPTT) and surrogate gradients achieves strong performance but remains biologically implausible. Equilibrium Propagation (EP) provides a more local and biologically grounded alternative. However, existing EP frameworks, primarily based on deterministic neurons, either require complex mechanisms to handle discontinuities in spiking dynamics or fail to scale beyond simple visual tasks. Inspired by the stochastic nature of biological spiking mechanisms and recent hardware trends, we propose a stochastic EP framework that integrates probabilistic spiking neurons into the EP paradigm. This formulation smooths the optimization landscape, stabilizes training, and enables scalable learning in deep convolutional spiking convergent recurrent neural networks (CRNNs). We show theoretically that the proposed stochastic EP dynamics approximate deterministic EP under mean-field theory, thereby inheriting its convergence guarantees. The proposed framework narrows the gap to both BPTT-trained SNNs and EP-trained non-spiking CRNNs on vision benchmarks while preserving locality, highlighting stochastic EP as a promising direction for neuromorphic and on-chip learning.
Problem

Research questions and friction points this paper is trying to address.

Develops stochastic Equilibrium Propagation for spiking neural networks
Addresses biological plausibility and scalability limitations in existing methods
Enables energy-efficient neuromorphic learning with theoretical guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic EP integrates probabilistic spiking neurons into the EP paradigm
Framework smooths the optimization landscape and stabilizes training
Enables scalable learning in deep convolutional spiking CRNNs
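To make the idea above concrete, here is a minimal illustrative sketch (not the paper's actual implementation) of an EP-style contrastive weight update with stochastic Bernoulli spiking neurons: a free phase and a nudged phase are each run to an approximate equilibrium by averaging spike rates over trials (a crude stand-in for the mean-field limit), and the weight update contrasts local correlations between the two phases. The sigmoid firing probability, symmetric weight matrix, and additive nudging term are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_prob(u):
    # Assumed firing probability: spike ~ Bernoulli(sigmoid(membrane potential))
    return 1.0 / (1.0 + np.exp(-u))

def run_to_equilibrium(W, x, y=None, beta=0.0, steps=100, trials=30):
    """Average stochastic spiking dynamics over trials to approximate the
    mean-field fixed point. beta > 0 nudges output units toward target y."""
    n = W.shape[0]
    rates = np.zeros(n)
    for _ in range(trials):
        s = np.zeros(n)
        for _ in range(steps):
            u = W @ s
            u[: len(x)] = x  # clamp input units
            if y is not None:
                u[-len(y):] += beta * (y - s[-len(y):])  # nudging term (assumed form)
            s = (rng.random(n) < spike_prob(u)).astype(float)
        rates += s
    return rates / trials

# Tiny symmetric recurrent network: 2 input units, 1 output unit, rest hidden
n = 8
W = rng.normal(0.0, 0.3, (n, n))
W = 0.5 * (W + W.T)          # EP assumes symmetric weights
np.fill_diagonal(W, 0.0)
x, y = np.array([1.0, -1.0]), np.array([1.0])

s_free = run_to_equilibrium(W, x)                  # free phase
s_nudge = run_to_equilibrium(W, x, y, beta=0.5)    # nudged phase

# Local EP-style update: contrast second-order statistics of the two phases
beta, lr = 0.5, 0.1
dW = (lr / beta) * (np.outer(s_nudge, s_nudge) - np.outer(s_free, s_free))
```

The update `dW` depends only on the pre- and postsynaptic rates of each phase, which is what makes the rule local; averaging Bernoulli spikes over many trials is how this sketch mimics the deterministic rate dynamics that the paper recovers in the mean-field limit.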
Jiaqi Lin
School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA
Yi Jiang
School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA
Abhronil Sengupta
Monkowski Career Development Associate Professor of EECS, Penn State University
Neuromorphic Computing