🤖 AI Summary
This work proposes a supervised Spike Agreement-Dependent Plasticity (SADP) learning rule that overcomes the limitations of traditional spike-timing-dependent plasticity (STDP), whose reliance on precise spike timing and pairwise updates makes efficient supervised learning difficult. SADP introduces, for the first time, a population-level spike agreement metric (such as Cohen's kappa) into the local synaptic update mechanism of spiking neural networks, eliminating the need for backpropagation, surrogate gradients, or teacher forcing. While preserving biological plausibility and hardware compatibility, the method enables efficient supervised training. Integrated with a hybrid CNN-SNN architecture and Poisson encoding, SADP achieves rapid convergence, strong performance, and marked hyperparameter robustness across MNIST, Fashion-MNIST, CIFAR-10, and biomedical image tasks, with a synaptic update mechanism of linear time complexity.
📝 Abstract
Spike-Timing-Dependent Plasticity (STDP) provides a biologically grounded learning rule for spiking neural networks (SNNs), but its reliance on precise spike timing and pairwise updates limits fast, efficient weight learning. We introduce a supervised extension of Spike Agreement-Dependent Plasticity (SADP), which replaces pairwise spike-timing comparisons with population-level agreement metrics such as Cohen's kappa. The proposed learning rule preserves strict synaptic locality, admits linear-time complexity, and enables efficient supervised learning without backpropagation, surrogate gradients, or teacher forcing. We integrate supervised SADP within hybrid CNN-SNN architectures, where convolutional encoders provide compact feature representations that are converted into Poisson spike trains for agreement-driven learning in the SNN. Extensive experiments on MNIST, Fashion-MNIST, CIFAR-10, and biomedical image classification tasks demonstrate competitive performance and fast convergence. Additional analyses show stable performance across broad hyperparameter ranges and compatibility with device-inspired synaptic update dynamics. Together, these results establish supervised SADP as a scalable, biologically grounded, and hardware-aligned learning paradigm for spiking neural networks.
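To make the core ingredients concrete, the sketch below illustrates (under assumptions, not the authors' actual implementation) the two building blocks the abstract names: Poisson encoding of normalized features into spike trains, and a Cohen's-kappa agreement score driving a local, linear-time weight update. The function names (`poisson_encode`, `cohens_kappa`, `sadp_update`), the learning rate, and the exact update form are all hypothetical placeholders chosen for illustration.

```python
import numpy as np

def poisson_encode(features, n_steps=50, rng=None):
    """Convert normalized feature values in [0, 1] into Poisson-style
    binary spike trains of shape (n_steps, n_features)."""
    rng = np.random.default_rng(rng)
    rates = np.clip(np.asarray(features, dtype=float), 0.0, 1.0)
    return (rng.random((n_steps, rates.size)) < rates).astype(np.int8)

def cohens_kappa(a, b):
    """Cohen's kappa between two equal-length binary spike trains:
    observed agreement corrected for chance agreement."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                            # observed agreement
    pa, pb = a.mean(), b.mean()                     # marginal firing rates
    pe = pa * pb + (1.0 - pa) * (1.0 - pb)          # chance agreement
    if np.isclose(pe, 1.0):                         # both trains constant
        return 0.0
    return (po - pe) / (1.0 - pe)

def sadp_update(w, pre_spikes, post_spikes, target_spikes, lr=0.01):
    """Illustrative agreement-driven update (hypothetical form): scale each
    synapse's change by the kappa agreement between the postsynaptic and
    target trains, gated by mean presynaptic activity. One pass over the
    spike trains, so linear in their length; no gradients are propagated."""
    kappa = cohens_kappa(post_spikes, target_spikes)
    return w + lr * kappa * pre_spikes.mean(axis=0)
```

Note that kappa is +1 for identical trains and negative for systematically disagreeing ones, so the same local rule yields potentiation when the population output matches the supervised target and depression when it does not.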