🤖 AI Summary
This work addresses the challenges of parameter redundancy and limited energy efficiency in spiking neural networks (SNNs) by introducing, for the first time, the convex sparsity-inducing Linearized Bregman Iteration (LBI) into SNN training. The approach integrates an enhanced AdaBreg optimizer—an Adam variant incorporating momentum and bias correction—and leverages Bregman distance minimization along with proximal soft-thresholding updates to enable efficient sparse learning. Evaluated on three neuromorphic benchmarks—SHD, SSC, and PSMNIST—the method achieves accuracy comparable to that of Adam while reducing the number of active parameters by approximately 50%, thereby substantially lowering model complexity and computational overhead.
📝 Abstract
Spiking Neural Networks (SNNs) offer an energy-efficient alternative to conventional Artificial Neural Networks (ANNs) but typically still require a large number of parameters. This work introduces Linearized Bregman Iterations (LBI) as an optimizer for training SNNs, enforcing sparsity through iterative minimization of the Bregman distance and proximal soft-thresholding updates. To improve convergence and generalization, we employ the AdaBreg optimizer, a momentum- and bias-corrected Bregman variant of Adam. Experiments on three established neuromorphic benchmarks, namely the Spiking Heidelberg Digits (SHD), the Spiking Speech Commands (SSC), and the Permuted Sequential MNIST (PSMNIST) datasets, show that LBI-based optimization reduces the number of active parameters by about 50% while maintaining accuracy comparable to models trained with the Adam optimizer, demonstrating the potential of convex sparsity-inducing methods for efficient neuromorphic learning.
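To illustrate the core mechanism the abstract refers to, the sketch below applies a plain linearized Bregman iteration with proximal soft-thresholding to a toy sparse least-squares problem. This is only a minimal sketch of the generic technique, not the paper's AdaBreg optimizer or its SNN training setup; all function names, the toy data, and the hyperparameters are our own assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of lam * ||.||_1 (component-wise soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(A, b, lam=10.0, iters=3000):
    """Toy linearized Bregman iteration for a sparse least-squares problem.

    Maintains a subgradient variable v that accumulates gradient steps,
    and reads off sparse parameters w via soft-thresholding of v.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    v = np.zeros(A.shape[1])                # subgradient (dual) variable
    w = np.zeros(A.shape[1])                # sparse parameter estimate
    for _ in range(iters):
        v -= step * A.T @ (A @ w - b)       # gradient step on v
        w = soft_threshold(v, lam)          # proximal soft-thresholding
    return w

# Toy demo: recover a 3-sparse vector from noiseless random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
w_true = np.zeros(100)
w_true[[3, 17, 42]] = [2.0, -1.5, 3.0]
b = A @ w_true
w_hat = linearized_bregman(A, b)
```

Note how sparsity arises structurally: entries of `w` stay exactly zero until the accumulated subgradient `v` exceeds the threshold `lam`, which is the property that lets LBI-style training keep most parameters inactive while still fitting the data.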