Stable Learning Using Spiking Neural Networks Equipped With Affine Encoders and Decoders

๐Ÿ“… 2024-04-06
๐Ÿ“ˆ Citations: 3
โœจ Influential: 1
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work investigates the learning stability and generalization of spiking neural networks (SNNs) under nonnegative weight constraints. We propose a novel architecture integrating affine encoders and decoders with nonnegative-weight spiking neurons, and analyze it via covering-number theory and Barron-function approximation theory. Our analysis establishes, for the first time, a depth-independent generalization bound for such SNNs; proves rate-optimal approximation of smooth functions, including the approximation of shallow ReLU networks; and overcomes the bottleneck whereby conventional SNN generalization bounds degrade with depth. The design ensures continuous dependence on parameters and stable gradient-descent training. Experiments on standard benchmarks demonstrate competitive accuracy and—crucially—a near-constant generalization error across increasing depths, empirically validating the theoretical predictions. The core contribution is the first provably generalizable, stably trainable, and depth-robust framework for nonnegative-weight SNNs.

๐Ÿ“ Abstract
We study the learning problem associated with spiking neural networks. Specifically, we focus on spiking neural networks composed of simple spiking neurons having only positive synaptic weights, equipped with an affine encoder and decoder. These neural networks are shown to depend continuously on their parameters, which facilitates classical covering number-based generalization statements and supports stable gradient-based training. We demonstrate that the positivity of the weights continues to enable a wide range of expressivity results, including rate-optimal approximation of smooth functions and dimension-independent approximation of Barron regular functions. In particular, we show in theory and simulations that affine spiking neural networks are capable of approximating shallow ReLU neural networks. Furthermore, we apply these neural networks to standard machine learning benchmarks, reaching competitive results. Finally, and remarkably, we observe that from a generalization perspective, contrary to feedforward neural networks or previous results for general spiking neural networks, the depth has little to no adverse effect on the generalization capabilities.
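The pipeline described in the abstract—an affine encoder, a layer of spiking neurons with purely nonnegative synaptic weights, and an affine decoder—can be sketched as a toy forward pass. This is an illustrative simplification, not the paper's actual spiking-neuron dynamics: the thresholded-ReLU spike response, the weight dimensions, and all names below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_snn(x, A_enc, b_enc, W_pos, A_dec, b_dec, theta=1.0):
    """Toy forward pass: affine encoder -> nonnegative-weight
    spiking-style layer (a thresholded response standing in for the
    paper's spiking neuron model) -> affine decoder."""
    z = A_enc @ x + b_enc                    # affine encoder
    assert np.all(W_pos >= 0)                # positivity constraint on synapses
    s = np.maximum(W_pos @ z - theta, 0.0)   # simplified spike response
    return A_dec @ s + b_dec                 # affine decoder

d_in, d_hid, d_out = 3, 8, 1
A_enc = rng.standard_normal((d_hid, d_in))
b_enc = rng.standard_normal(d_hid)
W_pos = np.abs(rng.standard_normal((d_hid, d_hid)))  # nonnegativity via |.|
A_dec = rng.standard_normal((d_out, d_hid))
b_dec = rng.standard_normal(d_out)

y = affine_snn(rng.standard_normal(d_in), A_enc, b_enc, W_pos, A_dec, b_dec)
print(y.shape)  # (1,)
```

Note how expressivity is recovered despite the positivity constraint: the unconstrained affine encoder and decoder supply the sign changes that the nonnegative synaptic layer cannot.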
Problem

Research questions and friction points this paper is trying to address.

Studying learning stability in spiking neural networks.
Exploring expressivity of networks with positive weights.
Analyzing depth impact on generalization performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Affine encoders and decoders enhance stability
Positive synaptic weights enable broad expressivity
Depth minimally affects generalization performance
๐Ÿ”Ž Similar Papers
No similar papers found.
A. M. Neuman
University of Vienna, Faculty of Mathematics, Kolingasse 14-16, 1090 Wien
Dominik Dold
Marie Curie Fellow, University of Vienna
Artificial Intelligence · Computational Neuroscience · Knowledge Graphs · Physics · Space
P. C. Petersen
University of Vienna, Faculty of Mathematics and Research Network Data Science @ Uni Vienna, Kolingasse 14-16, 1090 Wien