🤖 AI Summary
To address the limitations of classical spike-timing-dependent plasticity (STDP), namely its reliance on precise spike-pair timing and its poor scalability, this paper proposes Spike Agreement Dependent Plasticity (SADP), a biologically inspired synaptic learning rule for spiking neural networks. Instead of pairing individual pre- and postsynaptic spikes, SADP quantifies the statistical agreement between pre- and postsynaptic spike trains using population-level correlation metrics such as Cohen's kappa. Synaptic dynamics are modeled with spline kernels fitted to experimental data from the authors' iontronic organic memtransistor devices. The update rule has linear O(N) complexity and maps onto bitwise logic, enabling efficient hardware implementation. Evaluated on MNIST and Fashion-MNIST, SADP achieves 2.3–4.1% higher classification accuracy and 3.8× faster training than standard STDP, while preserving biological plausibility and computational scalability.
📝 Abstract
We introduce Spike Agreement Dependent Plasticity (SADP), a biologically inspired synaptic learning rule for Spiking Neural Networks (SNNs) that relies on the agreement between pre- and post-synaptic spike trains rather than precise spike-pair timing. SADP generalizes classical Spike-Timing-Dependent Plasticity (STDP) by replacing pairwise temporal updates with population-level correlation metrics such as Cohen's kappa. The SADP update rule admits linear-time complexity and supports efficient hardware implementation via bitwise logic. Empirical results on MNIST and Fashion-MNIST show that SADP, especially when equipped with spline-based kernels derived from our experimental iontronic organic memtransistor device data, outperforms classical STDP in both accuracy and runtime. Our framework bridges the gap between biological plausibility and computational scalability, offering a viable learning mechanism for neuromorphic systems.
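To make the core idea concrete, here is a minimal sketch of how an agreement-based update could look: Cohen's kappa is computed between two binary spike trains (observed match rate versus the match rate expected by chance), and the weight change is scaled by that agreement. The function names, the learning-rate parameter, and the exact update form are illustrative assumptions, not the authors' implementation.

```python
def spike_agreement_kappa(pre, post):
    # Cohen's kappa between two binary spike trains (one 0/1 value per bin).
    # Hypothetical sketch of the agreement metric described in the abstract;
    # the authors' exact kernel and normalization may differ.
    n = len(pre)
    # Observed agreement: fraction of bins where the trains match,
    # computable with bitwise logic (XNOR) as the abstract suggests.
    p_o = sum(1 for a, b in zip(pre, post) if a == b) / n
    # Expected agreement under independence, from the marginal firing rates.
    r_pre = sum(pre) / n
    r_post = sum(post) / n
    p_e = r_pre * r_post + (1 - r_pre) * (1 - r_post)
    if p_e == 1.0:  # degenerate case: both trains constant and identical
        return 1.0
    return (p_o - p_e) / (1 - p_e)


def sadp_update(w, pre, post, lr=0.01):
    # Hypothetical SADP-style weight update: potentiate when the trains
    # agree beyond chance (kappa > 0), depress when they disagree.
    return w + lr * spike_agreement_kappa(pre, post)
```

Each train is traversed once, so the update is O(N) in the number of time bins, consistent with the linear-time claim; on hardware, the per-bin comparison reduces to an XNOR gate plus counters.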