🤖 AI Summary
To address the challenge of training spiking neural networks (SNNs) on resource-constrained edge devices, this paper proposes TESS—a fully spatiotemporally local learning rule. TESS relies exclusively on intraneuronal local signals (spikes, membrane potentials, and eligibility traces) to perform temporal and spatial credit assignment, eliminating the need for backpropagation through time, gradient storage, or global temporal dependencies. The rule satisfies both strict temporal and spatial locality: computational and memory complexity scale linearly with neuron count and are independent of simulation duration. Integrating spike-timing-dependent plasticity (STDP), neural activity synchronization, and other biologically inspired plasticity mechanisms, TESS enables purely spike-driven weight updates. Evaluated on the IBM DVS Gesture, CIFAR10-DVS, and sequential CIFAR10/100 benchmarks, TESS achieves accuracy within about 1.4 percentage points of BPTT while drastically reducing training memory footprint and runtime, supporting efficient SNN training on edge hardware.
📝 Abstract
The demand for low-power inference and training of deep neural networks (DNNs) on edge devices has intensified the need for algorithms that are both scalable and energy-efficient. While spiking neural networks (SNNs) allow for efficient inference by processing complex spatio-temporal dynamics in an event-driven fashion, training them on resource-constrained devices remains challenging due to the high computational and memory demands of conventional error backpropagation (BP)-based approaches. In this work, we draw inspiration from biological mechanisms such as eligibility traces, spike-timing-dependent plasticity, and neural activity synchronization to introduce TESS, a temporally and spatially local learning rule for training SNNs. Our approach addresses both temporal and spatial credit assignment by relying solely on locally available signals within each neuron, thereby allowing computational and memory overheads to scale linearly with the number of neurons, independently of the number of time steps. Despite relying on local mechanisms, we demonstrate performance comparable to the backpropagation through time (BPTT) algorithm, within $\sim$1.4 accuracy points on challenging computer vision scenarios relevant at the edge, such as the IBM DVS Gesture dataset, CIFAR10-DVS, and temporal versions of CIFAR10 and CIFAR100. By producing performance comparable to BPTT while keeping time and memory complexity low, TESS enables efficient and scalable on-device learning at the edge.
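The locality and memory-scaling argument above can be illustrated with a generic three-factor, eligibility-trace-based update for a single layer of leaky integrate-and-fire neurons. This is a minimal sketch and not the TESS rule itself: the learning signal, decay constants, and target rate below are illustrative assumptions. The point it demonstrates is structural: every quantity needed for learning (potentials, traces, weights) has fixed size, so memory does not grow with the number of simulated time steps.

```python
import numpy as np

# Generic sketch of a temporally local, eligibility-trace-based weight
# update for one LIF layer. NOT the paper's TESS rule; all constants
# and the per-neuron learning signal below are illustrative assumptions.
rng = np.random.default_rng(0)
n_in, n_out, T = 8, 4, 50

W = rng.normal(0.0, 0.5, (n_out, n_in))  # synaptic weights
v = np.zeros(n_out)                      # membrane potentials
trace = np.zeros((n_out, n_in))          # one eligibility trace per synapse
tau_v, tau_e, thresh, lr = 0.9, 0.8, 1.0, 1e-2

for t in range(T):
    x = (rng.random(n_in) < 0.2).astype(float)  # Bernoulli input spikes
    v = tau_v * v + W @ x                       # leaky integration
    s = (v >= thresh).astype(float)             # output spikes
    v = v * (1.0 - s)                           # reset spiking neurons
    # Decay traces and accumulate presynaptic activity (local in time):
    trace = tau_e * trace + x[None, :]
    # Local per-neuron learning signal (a stand-in; TESS derives this
    # from locally available signals only). Here: deviation from an
    # assumed target firing rate of 0.1.
    err = s - 0.1
    W -= lr * err[:, None] * trace              # uses only current state

# Learning state is {v, trace, W}: size O(neurons + synapses),
# independent of the number of time steps T.
print(W.shape, trace.shape)
```

Note how the loop never stores past activations: at each step the update reads only the current spikes, potentials, and traces, which is precisely why complexity stays independent of sequence length, in contrast to BPTT's need to retain the full activation history.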