AI Summary
This work investigates the learning stability and generalization of spiking neural networks (SNNs) under nonnegative weight constraints. We propose an architecture that couples affine encoders and decoders with nonnegative-weight spiking neurons, and analyze it via covering-number theory and Barron-function approximation theory. Our analysis establishes, for the first time, a depth-independent generalization bound for such SNNs; proves rate-optimal approximation of smooth functions, including the approximation of shallow ReLU networks; and circumvents the bottleneck whereby conventional SNN generalization bounds degrade with depth. The design ensures continuous dependence on parameters and stable gradient-descent training. Experiments on standard benchmarks demonstrate competitive accuracy and, crucially, near-constant generalization error across increasing depths, empirically validating the theoretical predictions. The core contribution is a provably generalizable, stably trainable, and depth-robust framework for nonnegative-weight SNNs.
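The depth-independence claim rests on the classical covering-number route to generalization. As a reminder of the template being invoked (this is the standard argument, not the paper's exact statement; the paper's bound carries its own constants and its own covering-number estimate for the affine-SNN class $\mathcal{F}$):

```latex
% Classical covering-number generalization template (standard result, not the
% paper's exact bound). Assumptions: loss is 1-Lipschitz and bounded in [0,1],
% the data are n i.i.d. samples, and N(F, eps, ||.||_inf) denotes the uniform
% eps-covering number of the hypothesis class F. With probability >= 1 - delta:
\[
  \sup_{f \in \mathcal{F}} \bigl| R(f) - \widehat{R}_n(f) \bigr|
  \;\le\; 2\varepsilon
  \;+\; \sqrt{\frac{\log\!\bigl(2\,\mathcal{N}(\mathcal{F},\varepsilon,\|\cdot\|_\infty)/\delta\bigr)}{2n}}
\]
```

Depth-independence then reduces to bounding $\log \mathcal{N}$ for the constrained SNN class independently of depth, which is the kind of estimate that continuous dependence on parameters makes accessible.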
Abstract
We study the learning problem associated with spiking neural networks. Specifically, we focus on spiking neural networks composed of simple spiking neurons with only positive synaptic weights, equipped with an affine encoder and decoder. These neural networks are shown to depend continuously on their parameters, which facilitates classical covering-number-based generalization statements and supports stable gradient-based training. We demonstrate that the positivity of the weights still permits a wide range of expressivity results, including rate-optimal approximation of smooth functions and dimension-independent approximation of Barron-regular functions. In particular, we show in theory and in simulations that affine spiking neural networks are capable of approximating shallow ReLU neural networks. Furthermore, we apply these neural networks to standard machine learning benchmarks, reaching competitive results. Finally, and remarkably, we observe that, in contrast to feedforward neural networks and to previous results for general spiking neural networks, depth has little to no adverse effect on the generalization capabilities.
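To make the architecture concrete, here is a minimal sketch of one affine-encoder / spiking-layer / affine-decoder pass. It is an illustration, not the authors' implementation: it assumes the common linear spike-response integrate-and-fire model (a neuron fires at the first time $t$ with $\sum_i w_i (t - t_i)_+ = \theta$), and all names (`firing_time`, `affine_snn_layer`, the random parameters) are hypothetical.

```python
# Hedged sketch of an affine SNN layer: affine encoder -> spiking neurons with
# nonnegative weights -> affine decoder. Assumes a linear spike-response
# integrate-and-fire model; NOT the paper's code, just the general shape.
import numpy as np

def firing_time(t_in, w, theta=1.0):
    """First time t with sum_i w_i * max(t - t_i, 0) = theta, for w_i >= 0."""
    order = np.argsort(t_in)
    w_sum, wt_sum = 0.0, 0.0
    for k, idx in enumerate(order):
        w_sum += w[idx]
        wt_sum += w[idx] * t_in[idx]
        if w_sum == 0.0:
            continue  # no synaptic drive yet; potential is still flat
        t = (theta + wt_sum) / w_sum  # crossing time if the slope stays w_sum
        t_next = t_in[order[k + 1]] if k + 1 < len(order) else np.inf
        if t <= t_next:
            return t  # threshold is crossed before the next input spike
    return np.inf  # threshold never reached

def affine_snn_layer(x, A_enc, b_enc, W, A_dec, b_dec, theta=1.0):
    """Affine encoder -> nonnegative-weight spiking neurons -> affine decoder."""
    t_in = A_enc @ x + b_enc      # encode real inputs as spike times
    W_pos = np.maximum(W, 0.0)    # clamp to honor the positivity constraint
    t_out = np.array([firing_time(t_in, w, theta) for w in W_pos])
    return A_dec @ t_out + b_dec  # decode firing times back to real outputs

# Tiny usage example with random (hypothetical) parameters:
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
y = affine_snn_layer(x, rng.standard_normal((4, 3)), np.zeros(4),
                     rng.random((2, 4)), rng.standard_normal((2, 2)), np.zeros(2))
print(y)
```

Note the role of positivity in this sketch: with nonnegative weights the membrane potential is nondecreasing, so the first threshold crossing can be found by a single sorted sweep over input spikes, and on each active set the firing time is an affine-rational, hence continuous, function of the weights and input times, consistent with the continuity-in-parameters property emphasized above.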