Backpropagation-free Spiking Neural Networks with the Forward-Forward Algorithm

📅 2025-02-19
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training spiking neural networks (SNNs) with traditional backpropagation (BP) suffers from computational inefficiency and biological implausibility. To address these limitations, this work introduces the Forward-Forward (FF) algorithm, previously applied to artificial neural networks, into SNN training, establishing a backward-pass-free, purely feedforward local learning paradigm. The method employs leaky integrate-and-fire (LIF) neurons within an event-driven training framework and is evaluated on both static (MNIST, Fashion-MNIST, Kuzushiji-MNIST) and spiking (Neuro-MNIST, SHD) benchmarks. Results show that the FF-SNN achieves higher accuracy on the static datasets than prior FF-based SNNs while using fewer parameters; on SHD, it outperforms most existing SNN models and remains competitive with state-of-the-art BP-trained models. This establishes a biologically plausible, hardware-efficient spiking learning framework that eliminates backward computation and global error signals.
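The LIF dynamics the summary refers to can be sketched in a few lines of discrete-time simulation. The leak factor, threshold, and hard reset below are illustrative assumptions, not the paper's reported hyperparameters:

```python
import numpy as np

def lif_step(v, spikes_in, w, beta=0.9, v_th=1.0):
    """One discrete-time step of a leaky integrate-and-fire (LIF) layer.

    v         : membrane potentials, shape (n_out,)
    spikes_in : binary input spikes, shape (n_in,)
    w         : weight matrix, shape (n_out, n_in)
    beta      : membrane leak factor (assumed value)
    v_th      : firing threshold (assumed value)
    """
    v = beta * v + w @ spikes_in             # leaky integration of input current
    spikes_out = (v >= v_th).astype(float)   # emit a spike where threshold is crossed
    v = v * (1.0 - spikes_out)               # hard reset of neurons that fired
    return v, spikes_out

# Drive a small layer with a random Bernoulli spike train for 10 steps.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(4, 8))
v = np.zeros(4)
for t in range(10):
    x = (rng.random(8) < 0.3).astype(float)  # input spikes at ~30% rate
    v, s = lif_step(v, x, w)
```

Because communication between layers is a binary spike vector `s` rather than a dense activation, this kind of step function is what makes event-driven, neuromorphic-friendly execution possible.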

📝 Abstract
Spiking Neural Networks (SNNs) offer a biologically inspired computational paradigm that emulates neuronal activity through discrete spike-based processing. Despite their advantages, training SNNs with traditional backpropagation (BP) remains challenging due to computational inefficiencies and a lack of biological plausibility. This study explores the Forward-Forward (FF) algorithm as an alternative learning framework for SNNs. Unlike backpropagation, which relies on forward and backward passes, the FF algorithm employs two forward passes, enabling layer-wise localized learning, enhanced computational efficiency, and improved compatibility with neuromorphic hardware. We introduce an FF-based SNN training framework and evaluate its performance across both non-spiking (MNIST, Fashion-MNIST, Kuzushiji-MNIST) and spiking (Neuro-MNIST, SHD) datasets. Experimental results demonstrate that our model surpasses existing FF-based SNNs on evaluated static datasets with a much lighter architecture while achieving accuracy comparable to state-of-the-art backpropagation-trained SNNs. On more complex spiking tasks such as SHD, our approach outperforms other SNN models and remains competitive with leading backpropagation-trained SNNs. These findings highlight the FF algorithm's potential to advance SNN training methodologies by addressing some key limitations of backpropagation.
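The two-forward-pass scheme the abstract describes can be illustrated with a minimal layer-local update, in the style of Hinton's original Forward-Forward formulation. This sketch uses a rate-like ReLU activity as a stand-in for spike counts; the sum-of-squares goodness, the threshold `theta`, the toy positive/negative data, and the learning rate are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def goodness(h):
    # Hinton-style goodness: sum of squared activity per sample. In an SNN
    # this could instead be a function of per-neuron spike counts (assumption).
    return np.sum(h ** 2, axis=-1)

def ff_layer_update(w, x_pos, x_neg, theta=2.0, lr=0.03):
    """One local Forward-Forward update for a single layer.

    Two forward passes (positive and negative data) replace the
    forward+backward pair of backpropagation; the update uses only
    quantities available at this layer, with no global error signal.
    """
    h_pos = np.maximum(0.0, x_pos @ w)   # positive forward pass (ReLU stand-in)
    h_neg = np.maximum(0.0, x_neg @ w)   # negative forward pass
    # Logistic loss pushing goodness above theta for positive data,
    # below theta for negative data.
    p_pos = 1.0 / (1.0 + np.exp(-(goodness(h_pos) - theta)))
    p_neg = 1.0 / (1.0 + np.exp(-(goodness(h_neg) - theta)))
    # Local gradient of the loss w.r.t. this layer's weights only
    # (h is already zero where the ReLU is inactive).
    g_pos = -(1.0 - p_pos)[:, None, None] * 2.0 * x_pos[:, :, None] * h_pos[:, None, :]
    g_neg = p_neg[:, None, None] * 2.0 * x_neg[:, :, None] * h_neg[:, None, :]
    w = w - lr * (g_pos + g_neg).mean(axis=0)
    return w

# Toy "positive" and "negative" data: hypothetical stand-ins for real
# vs. corrupted samples in the FF recipe.
rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(10, 16))
x_pos = rng.normal(loc=1.0, size=(64, 10))
x_neg = rng.normal(loc=0.0, scale=0.3, size=(64, 10))

def ff_loss(w):
    gp = goodness(np.maximum(0.0, x_pos @ w))
    gn = goodness(np.maximum(0.0, x_neg @ w))
    return np.mean(np.log1p(np.exp(-(gp - 2.0)))) + np.mean(np.log1p(np.exp(gn - 2.0)))

loss_before = ff_loss(w)
for _ in range(50):
    w = ff_layer_update(w, x_pos, x_neg)
loss_after = ff_loss(w)
```

Since each layer optimizes its own goodness objective, layers can be trained greedily or in parallel without storing activations for a backward pass, which is the efficiency and hardware-compatibility argument the abstract makes.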
Problem

Research questions and friction points this paper is trying to address.

Replacing backpropagation in SNNs with the Forward-Forward algorithm
Enhancing computational efficiency and biological plausibility in SNNs
Achieving competitive accuracy without backpropagation in spiking tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Forward-Forward algorithm replaces backpropagation
Layer-wise localized learning enhances efficiency
Improved compatibility with neuromorphic hardware