Spike Agreement Dependent Plasticity: A scalable Bio-Inspired learning paradigm for Spiking Neural Networks

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of classical spike-timing-dependent plasticity (STDP), namely its reliance on precise spike-pair timing and poor scalability, this paper proposes Spike Agreement Dependent Plasticity (SADP), a biologically inspired synaptic learning rule. Instead of pairing individual pre- and postsynaptic spikes, SADP quantifies statistical agreement between population-level spike trains, using Cohen's kappa coefficient to measure neuronal co-activation. Synaptic dynamics are modeled via a spline kernel function derived from experimental data of iontronic organic memtransistor devices. With O(N) algorithmic complexity and updates expressible in bitwise logic, SADP admits efficient hardware implementation. Evaluated on MNIST and Fashion-MNIST, SADP achieves 2.3–4.1% higher classification accuracy and 3.8× faster training than standard STDP, while preserving biological plausibility, learning efficacy, and computational scalability.

📝 Abstract
We introduce Spike Agreement Dependent Plasticity (SADP), a biologically inspired synaptic learning rule for Spiking Neural Networks (SNNs) that relies on the agreement between pre- and post-synaptic spike trains rather than precise spike-pair timing. SADP generalizes classical Spike-Timing-Dependent Plasticity (STDP) by replacing pairwise temporal updates with population-level correlation metrics such as Cohen's kappa. The SADP update rule admits linear-time complexity and supports efficient hardware implementation via bitwise logic. Empirical results on MNIST and Fashion-MNIST show that SADP, especially when equipped with spline-based kernels derived from our experimental iontronic organic memtransistor device data, outperforms classical STDP in both accuracy and runtime. Our framework bridges the gap between biological plausibility and computational scalability, offering a viable learning mechanism for neuromorphic systems.
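The abstract's core idea, replacing pairwise spike timing with a population-level agreement metric such as Cohen's kappa, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the simple kappa-proportional weight update are assumptions, and the paper's spline kernel derived from memtransistor device data is not reproduced here.

```python
import numpy as np

def cohens_kappa(pre, post):
    """Cohen's kappa between two binary spike trains of equal length.

    Observed agreement comes from an elementwise equality test, which
    on binary trains reduces to bitwise logic (XNOR), in line with the
    hardware-friendly formulation the abstract describes. Runs in O(N).
    """
    pre = np.asarray(pre, dtype=bool)
    post = np.asarray(post, dtype=bool)
    p_o = np.mean(pre == post)                  # observed agreement
    p_pre, p_post = pre.mean(), post.mean()     # firing rates
    # chance agreement: both spike, or both silent, by independent chance
    p_e = p_pre * p_post + (1.0 - p_pre) * (1.0 - p_post)
    if p_e == 1.0:                              # degenerate constant trains
        return 1.0
    return (p_o - p_e) / (1.0 - p_e)

def sadp_update(w, pre, post, lr=0.01):
    """Hypothetical agreement-driven update: potentiate when the trains
    agree beyond chance (kappa > 0), depress when they disagree."""
    return w + lr * cohens_kappa(pre, post)
```

Because kappa is computed once per train pair rather than per spike pair, the update cost is linear in the train length, matching the scalability claim.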
Problem

Research questions and friction points this paper is trying to address.

Develops scalable learning rule for spiking neural networks
Replaces precise spike timing with population correlation metrics
Bridges biological plausibility with computational efficiency in neuromorphic systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses population-level correlation metrics for learning
Replaces pairwise timing with spike train agreement
Enables linear-time complexity and hardware efficiency
Saptarshi Bej
School of Data Science, Indian Institute of Science Education and Research Thiruvananthapuram, India
Muhammed Sahad E
School of Data Science, Indian Institute of Science Education and Research Thiruvananthapuram, India
Gouri Lakshmi
School of Data Science, Indian Institute of Science Education and Research Thiruvananthapuram, India
Harshit Kumar
Whiterabbit.ai, Inc.
Pritam Kar
School of Data Science, Indian Institute of Science Education and Research Thiruvananthapuram, India
Bikas C Das
School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, India