🤖 AI Summary
To address the neurobiologically implausible reliance of spiking neural network (SNN) training on global backpropagation, this paper proposes a supervised learning framework grounded in predictive coding theory. It is the first to deeply integrate predictive coding mechanisms with local Hebbian-style synaptic plasticity, enabling fully spike-based, event-driven, gradient-free end-to-end autonomous training. Crucially, the method eliminates error backpropagation entirely, relying solely on local feedforward–feedback connectivity and neuron- and synapse-level plasticity rules—thereby enhancing both neuroscientific plausibility and hardware efficiency. Evaluated on MNIST, N-MNIST, Caltech Face/Motorbike, and ETH-80 benchmarks, it achieves state-of-the-art accuracies of 98.1%, 98.5%, 99.25%, and 84.25%, respectively. This work establishes a novel brain-inspired computing paradigm that simultaneously delivers high performance and strong biological interpretability.
📝 Abstract
Deemed the third generation of neural networks, event-driven Spiking Neural Networks (SNNs) combined with bio-plausible local learning rules are promising for building low-power, neuromorphic hardware. However, owing to the non-linearity and discrete nature of spiking neurons, training SNNs remains difficult and is still an open problem. Rooted in gradient descent, backpropagation has achieved stunning success in multi-layer SNNs. Nevertheless, it is widely considered biologically implausible and consumes relatively high computational resources. In this paper, we propose a novel learning algorithm inspired by predictive coding theory and show that it can perform supervised learning fully autonomously, as successfully as backpropagation, using only local Hebbian plasticity. Furthermore, the method reaches favorable performance compared to state-of-the-art multi-layer SNNs: test accuracies of 99.25% on the Caltech Face/Motorbike dataset, 84.25% on the ETH-80 dataset, 98.1% on the MNIST dataset, and 98.5% on the neuromorphic N-MNIST dataset. Moreover, our work provides a new perspective on how supervised learning can be implemented directly in spiking neural circuitry, which may offer new insights into neuromorphic computation in neuroscience.
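To make the core idea concrete, below is a minimal rate-based sketch of supervised predictive coding with purely local, Hebbian-style updates. This is *not* the paper's spiking, event-driven model: it follows the standard predictive-coding scheme in which each layer holds value nodes and error nodes, hidden activity relaxes to minimize local prediction errors while input and target are clamped, and each weight update uses only the post-synaptic error and pre-synaptic activity. All layer sizes, learning rates, the `tanh` activation, and the toy XOR-style task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 8, 1]  # toy network: input -> hidden -> output (arbitrary sizes)
W = [rng.normal(scale=0.5, size=(sizes[l + 1], sizes[l]))
     for l in range(len(sizes) - 1)]

def f(x):  return np.tanh(x)              # activation
def fp(x): return 1.0 - np.tanh(x) ** 2   # its derivative

def train_step(x_in, y, n_infer=50, dt=0.1, lr=0.05):
    # Value nodes: input x[0] and target x[2] are clamped; x[1] relaxes.
    x = [x_in, W[0] @ f(x_in), y]  # hidden starts at its feedforward prediction
    for _ in range(n_infer):
        # Local prediction errors: e_l = x_l - W_{l-1} f(x_{l-1}).
        e1 = x[1] - W[0] @ f(x[0])
        e2 = x[2] - W[1] @ f(x[1])
        # Hidden nodes descend the local energy: reduce their own error
        # while integrating feedback error from the layer above.
        x[1] += dt * (-e1 + fp(x[1]) * (W[1].T @ e2))
    # Hebbian-style updates: (post-synaptic error) x (pre-synaptic activity).
    e1 = x[1] - W[0] @ f(x[0])
    e2 = x[2] - W[1] @ f(x[1])
    W[0] += lr * np.outer(e1, f(x[0]))
    W[1] += lr * np.outer(e2, f(x[1]))

def predict(x_in):
    h = W[0] @ f(x_in)        # plain feedforward pass at test time
    return W[1] @ f(h)

# Toy XOR-style regression task (illustrative only).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

def mse():
    return float(np.mean([(predict(x) - y) ** 2 for x, y in zip(X, Y)]))

mse_before = mse()
for epoch in range(2000):
    for x_in, y in zip(X, Y):
        train_step(x_in, y)
mse_after = mse()
print(f"MSE before: {mse_before:.4f}  after: {mse_after:.4f}")
```

Note that no error signal is transported backwards through a separate backpropagation phase: every quantity a synapse needs (its pre-synaptic activity and the error node it projects to) is locally available, which is what makes this family of rules attractive for neuromorphic hardware.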