A flexible framework for structural plasticity in GPU-accelerated sparse spiking neural networks

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing training methods for artificial and spiking neural networks (ANNs/SNNs) rely almost exclusively on synaptic plasticity and neglect structural plasticity, i.e. the dynamic formation and elimination of synapses observed in biological brains. Moreover, mainstream pruning techniques optimize only inference efficiency, not training cost. Method: a flexible GPU-accelerated structural plasticity framework, built on the GeNN simulator, that combines the e-prop supervised learning rule with the DEEP R algorithm to rewire synapses online while training sparse SNNs. Contribution/Results: compared with dense baselines, the sparse classifiers reduce training time by up to 10x while matching classification accuracy, and in an unsupervised setting the framework learns topographic maps in faster-than-real-time simulations. By co-optimizing connectivity structure and task performance throughout training, the approach enables rapid, autonomous topology self-organization and advances energy-efficient neuromorphic computing.
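As a rough illustration of the DEEP R rewiring step described above, here is a minimal NumPy sketch operating on a flat vector of potential synapses, following the algorithm of Bellec et al. (2018). All names (`rewire_step`, `theta`, `signs`, `active`) are hypothetical, the gradient is a stand-in, and the actual framework runs this logic in GPU kernels inside GeNN; this is only a CPU sketch of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)


def rewire_step(theta, signs, active, grad, lr=1e-3, l1=1e-4, temperature=1e-5):
    """One DEEP R rewiring step on a flat vector of potential synapses.

    theta  : non-negative parameters of potential synapses (weight = sign * theta)
    signs  : fixed +1/-1 sign assigned to every potential synapse
    active : boolean mask of currently realised connections
    grad   : loss gradient w.r.t. the synaptic weights
    """
    # Gradient descent with L1 shrinkage and exploratory noise, active synapses only
    noise = rng.normal(0.0, np.sqrt(2.0 * lr * temperature), size=theta.shape)
    theta[active] += -lr * (signs[active] * grad[active] + l1) + noise[active]

    # Synapses whose parameter crossed zero become dormant (are pruned)
    pruned = active & (theta < 0.0)
    active[pruned] = False
    theta[pruned] = 0.0

    # Reactivate the same number of randomly chosen dormant synapses so the
    # total connection count (the sparsity level) stays constant
    n_pruned = int(pruned.sum())
    if n_pruned:
        dormant = np.flatnonzero(~active)
        reborn = rng.choice(dormant, size=n_pruned, replace=False)
        active[reborn] = True  # new connections start with zero weight
    return theta, active


# Toy usage: 10 % connectivity over 1000 potential synapses
n = 1000
active = np.zeros(n, dtype=bool)
active[rng.choice(n, 100, replace=False)] = True
theta = np.where(active, rng.uniform(0.0, 0.1, n), 0.0)
signs = rng.choice(np.array([-1.0, 1.0]), n)
grad = rng.normal(0.0, 1.0, n)  # stand-in for a real e-prop gradient
theta, active = rewire_step(theta, signs, active, grad)
assert active.sum() == 100      # sparsity is preserved across rewiring
```

In the paper this rewiring runs online, interleaved with the e-prop weight updates, so connectivity and weights are optimized together throughout training.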

📝 Abstract
The majority of research in both training Artificial Neural Networks (ANNs) and modeling learning in biological brains focuses on synaptic plasticity, where learning equates to changing the strength of existing connections. However, in biological brains, structural plasticity - where new connections are created and others removed - is also vital, not only for effective learning but also for recovery from damage and optimal resource usage. Inspired by structural plasticity, pruning is often used in machine learning to remove weak connections from trained models to reduce the computational requirements of inference. However, the machine learning frameworks typically used for backpropagation-based training of both ANNs and Spiking Neural Networks (SNNs) are optimized for dense connectivity, meaning that pruning does not help reduce the training costs of ever-larger models. The GeNN simulator already supports efficient GPU-accelerated simulation of sparse SNNs for computational neuroscience and machine learning. Here, we present a new flexible framework for implementing GPU-accelerated structural plasticity rules and demonstrate this first using the e-prop supervised learning rule and DEEP R to train efficient, sparse SNN classifiers and then, in an unsupervised learning context, to learn topographic maps. Compared to baseline dense models, our sparse classifiers reduce training time by up to 10x while the DEEP R rewiring enables them to perform as well as the original models. We demonstrate topographic map formation in faster-than-realtime simulations, provide insights into the connectivity evolution, and measure simulation speed versus network size. The proposed framework will enable further research into achieving and maintaining sparsity in network structure and neural communication, as well as exploring the computational benefits of sparsity in a range of neuromorphic applications.
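To make the e-prop side of the method concrete, the following is a simplified sketch of the eligibility-trace weight update for a single leaky integrate-and-fire synapse, in the spirit of Bellec et al. (2020). It omits recurrent dynamics and adaptive thresholds, and all names and parameter values (`eprop_update`, `alpha`, `kappa`, `gamma`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


def eprop_update(pre_spikes, v_post, learning_signal, alpha=0.9, kappa=0.9,
                 v_thresh=1.0, gamma=0.3, lr=1e-3):
    """Simplified e-prop weight update for one synapse over T time steps.

    pre_spikes      : (T,) binary presynaptic spike train
    v_post          : (T,) postsynaptic membrane potential trace
    learning_signal : (T,) broadcast error signal for the postsynaptic neuron
    """
    z_bar = 0.0  # low-pass filtered presynaptic activity (membrane decay alpha)
    e_bar = 0.0  # eligibility trace filtered with the readout decay kappa
    dw = 0.0
    for t in range(len(pre_spikes)):
        z_bar = alpha * z_bar + pre_spikes[t]
        # Piecewise-linear pseudo-derivative of the spike non-linearity
        psi = gamma * max(0.0, 1.0 - abs((v_post[t] - v_thresh) / v_thresh))
        e_bar = kappa * e_bar + psi * z_bar
        dw -= lr * learning_signal[t] * e_bar  # accumulate the weight change
    return dw


# Toy usage with random traces, just to exercise the function
rng = np.random.default_rng(1)
T = 100
dw = eprop_update(rng.integers(0, 2, T).astype(float),
                  rng.normal(0.5, 0.3, T), rng.normal(0.0, 0.1, T))
```

Because the update only needs quantities local to each synapse plus a broadcast error signal, it maps naturally onto per-synapse GPU kernels, which is what makes the combination with online rewiring practical.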
Problem

Research questions and friction points this paper is trying to address.

Developing a GPU-accelerated structural plasticity framework for spiking neural networks
Reducing training costs, not just inference costs, through sparse connectivity
Enabling efficient sparse SNN classifiers with flexible, online rewiring rules
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flexible framework for GPU-accelerated structural plasticity rules, built on the GeNN simulator (see the connectivity sketch after this list)
Sparse SNN classifiers trained with the e-prop supervised learning rule and DEEP R rewiring
Up to 10x faster training while matching the accuracy of dense baseline models
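The first innovation above hinges on a connectivity format that can be rewired in place on the device. Below is a hedged CPU sketch of a padded ragged-row structure of the kind GeNN uses for sparse connectivity (a padded row of postsynaptic indices plus a row length per presynaptic neuron); the class and method names are hypothetical and only illustrate the idea, not GeNN's actual data structures.

```python
import numpy as np


class PaddedSparseRows:
    """Row-per-presynaptic-neuron connectivity with fixed per-row capacity.

    Keeping a padded index row and an explicit row length lets synapses be
    removed and created in place during training without reallocating
    (device) memory. Hypothetical CPU illustration only.
    """

    def __init__(self, n_pre, max_row_length):
        self.ind = np.zeros((n_pre, max_row_length), dtype=np.int32)
        self.weight = np.zeros((n_pre, max_row_length), dtype=np.float32)
        self.row_length = np.zeros(n_pre, dtype=np.int32)

    def add_synapse(self, pre, post, w=0.0):
        r = self.row_length[pre]
        assert r < self.ind.shape[1], "row is at capacity"
        self.ind[pre, r] = post
        self.weight[pre, r] = w
        self.row_length[pre] += 1

    def remove_synapse(self, pre, slot):
        # Swap the last valid entry into the freed slot: O(1) removal that
        # keeps the row compact, at the cost of synapse order within the row
        last = self.row_length[pre] - 1
        self.ind[pre, slot] = self.ind[pre, last]
        self.weight[pre, slot] = self.weight[pre, last]
        self.row_length[pre] = last


# Toy usage
conn = PaddedSparseRows(n_pre=4, max_row_length=8)
conn.add_synapse(0, 17, w=0.05)
conn.add_synapse(0, 42, w=0.10)
conn.remove_synapse(0, 0)  # slot 0 now holds the synapse onto neuron 42
assert conn.row_length[0] == 1
```

With compact rows, simulation kernels only iterate over `row_length` entries per presynaptic neuron, which is what keeps both sparse simulation and online rewiring cheap on the GPU.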