🤖 AI Summary
To address the conflict between local plasticity and supervised objectives, the high memory overhead of spike-timing records, and the difficulty of integrating biological signals into deep Spiking Neural Networks (SNNs), this paper proposes DA-SSDP, a local plasticity mechanism that combines dopamine-like neuromodulation with spike synchrony. Its core innovation is synchrony gating: a fixed, batch-level gate, calibrated by correlating a synchrony metric with the task loss during a brief warm-up, automatically assesses whether the local signal is informative; when it is not, the rule degrades to a lightweight regularizer. DA-SSDP stores only binary spike indicators and Gaussian-kernel-encoded first-spike latencies, drastically reducing memory footprint, and it is fully compatible with standard surrogate-gradient backpropagation without architectural modifications. Evaluated on CIFAR-10, CIFAR-100, CIFAR10-DVS, and ImageNet-1K, DA-SSDP improves top-1 accuracy by 0.42%, 0.99%, 0.1%, and 0.73%, respectively, with only a controlled increase in training cost.
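For concreteness, here is a minimal PyTorch-style sketch of how a warm-up-calibrated synchrony gate of this kind could work. The function names, the synchrony proxy, the correlation threshold, and the gate scaling are all illustrative assumptions, not the paper's exact formulation:

```python
import torch

def batch_synchrony(spikes: torch.Tensor) -> torch.Tensor:
    # spikes: binary spike indicators of shape [batch, time, neurons]
    rates = spikes.float().mean(dim=1)        # per-neuron firing rates, [batch, neurons]
    # Coarse co-activation proxy: mean pairwise product of rates across the batch
    return (rates @ rates.T).mean()

def calibrate_gate(sync_history: list, loss_history: list) -> float:
    # After the warm-up epochs, fix the gate from the observed
    # synchrony-loss correlation (the 0.1 threshold is an assumption).
    stacked = torch.stack([torch.tensor(sync_history), torch.tensor(loss_history)])
    corr = torch.corrcoef(stacked)[0, 1]
    if torch.isnan(corr) or corr.abs() < 0.1:
        return 1.0  # synchrony uninformative: fall back to the plain two-factor rule
    return 1.0 + corr.abs().item()  # illustrative scaling of the local update
```

Because the gate is fixed after warm-up, it adds no per-step overhead during the main training loop; only the two scalar histories need to be tracked.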
📝 Abstract
While surrogate-gradient backpropagation has proven effective for training deep spiking neural networks (SNNs), incorporating biologically inspired local signals at scale remains challenging. The difficulty stems primarily from the high memory demands of maintaining precise spike-timing records and the tendency of purely local plasticity updates to conflict with the supervised objective. To leverage local signals derived from spiking neuron dynamics, we introduce Dopamine-Modulated Spike-Synchrony-Dependent Plasticity (DA-SSDP), a loss-aware local learning rule built on spike synchrony. DA-SSDP condenses spike patterns into a batch-level synchrony metric. A brief warm-up phase measures its correlation with the task loss and sets a fixed gate that subsequently scales the magnitude of the local update. When synchrony proves unrelated to the task, the gate settles at one, reducing DA-SSDP to a basic two-factor synchrony mechanism that delivers minor weight adjustments driven by concurrent spike firing and a Gaussian latency kernel. These small updates are applied only to the network's deeper layers after the backpropagation step; in our experiments, this simplified variant did not degrade performance and sometimes gave a small accuracy boost, acting as a regularizer during training. The rule stores only binary spike indicators and first-spike latencies encoded with a Gaussian kernel. Without altering the model architecture or optimization routine, evaluations on CIFAR-10 (+0.42%), CIFAR-100 (+0.99%), CIFAR10-DVS (+0.1%), and ImageNet-1K (+0.73%) show consistent accuracy gains with only minor computational overhead. Our code is available at https://github.com/NeuroSyd/DA-SSDP.
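The two-factor update described in the abstract could take a Hebbian-style outer-product form such as the sketch below; the learning rate `eta`, the kernel width `sigma`, and the exact placement of the Gaussian term are our assumptions, not the published rule:

```python
import torch

def dassdp_update(weight: torch.Tensor,
                  pre_spiked: torch.Tensor, post_spiked: torch.Tensor,
                  pre_latency: torch.Tensor, post_latency: torch.Tensor,
                  gate: float, eta: float = 1e-4, sigma: float = 2.0) -> torch.Tensor:
    # Sketch of the post-backprop local update on one deep layer (assumed form):
    # a small term gated by concurrent firing and a Gaussian kernel over
    # first-spike latency gaps, scaled by the fixed warm-up gate. Only binary
    # spike indicators and first-spike latencies need to be stored.
    co_fire = post_spiked.float().unsqueeze(1) * pre_spiked.float().unsqueeze(0)  # [out, in]
    dt = post_latency.unsqueeze(1) - pre_latency.unsqueeze(0)                     # latency gaps
    kernel = torch.exp(-dt.pow(2) / (2 * sigma ** 2))                             # Gaussian latency term
    with torch.no_grad():  # applied after the surrogate-gradient step
        weight += eta * gate * co_fire * kernel
    return weight
```

Keeping the update outside the autograd graph (the `no_grad` block) is what makes the rule drop-in compatible with an unmodified surrogate-gradient training loop.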