🤖 AI Summary
Spiking neural networks (SNNs) lack the theoretical tools, such as the "linear pieces" of artificial neural networks (ANNs), used to analyse expressiveness and trainability. Method: The paper introduces "causal pieces": regions of the input domain on which an SNN's output spike times are locally Lipschitz continuous with respect to both the input spike times and the network parameters, enabling a piecewise analysis of SNN computation. The approach combines piecewise continuity analysis, Lipschitz stability arguments, and spiking simulations. Contribution/Results: The number of causal pieces is proposed as a measure of an SNN's approximation capability, and parameter initialisations that yield a high number of causal pieces on the training set strongly correlate with SNN training success. Notably, feedforward SNNs with purely positive weights exhibit a surprisingly high number of causal pieces and achieve competitive performance on benchmark tasks. The causal-piece count of the initial parameters thus serves as a practical predictor of trainability and as a principled tool that may also enable new comparisons between SNNs and ANNs.
📝 Abstract
We introduce a novel concept for spiking neural networks (SNNs) derived from the idea of "linear pieces" used to analyse the expressiveness and trainability of artificial neural networks (ANNs). We prove that the input domain of an SNN decomposes into distinct causal regions where its output spike times are locally Lipschitz continuous with respect to the input spike times and network parameters. The number of such regions, which we call "causal pieces", is a measure of the approximation capabilities of SNNs. In particular, we demonstrate in simulation that parameter initialisations which yield a high number of causal pieces on the training set strongly correlate with SNN training success. Moreover, we find that feedforward SNNs with purely positive weights exhibit a surprisingly high number of causal pieces, allowing them to achieve competitive performance levels on benchmark tasks. We believe that causal pieces are not only a powerful and principled tool for improving SNNs, but might also open up new ways of comparing SNNs and ANNs in the future.
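The idea of causal pieces can be illustrated with a toy model. The sketch below is a hypothetical illustration, not code from the paper: it uses a single neuron with linear-ramp post-synaptic potentials, in which the output spike time is a linear (hence locally Lipschitz) function of the input spike times for each fixed causal set, i.e. the set of input spikes that arrive before the output spike. Sweeping one input spike time and counting the distinct causal sets encountered counts the causal pieces along that one-dimensional slice of the input domain. The threshold `THETA`, the neuron model, and all function names are assumptions made for this example.

```python
import numpy as np

THETA = 1.0  # firing threshold (hypothetical value)

def output_spike_time(times, weights):
    """Output spike time of a toy neuron with linear-ramp PSPs:
    V(t) = sum_{t_i <= t} w_i * (t - t_i); the neuron fires when V = THETA.
    Returns (t_out, causal_set); (inf, empty set) if it never fires."""
    times = np.asarray(times, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(times)
    for k in range(1, len(times) + 1):
        idx = order[:k]
        w_sum = weights[idx].sum()
        if w_sum <= 0:
            continue  # slope not positive yet: threshold cannot be reached
        # Within this causal set the crossing time is linear in the t_i:
        t_cand = (THETA + (weights[idx] * times[idx]).sum()) / w_sum
        # Candidate is valid only between the k-th and (k+1)-th input spike.
        last = times[order[k - 1]]
        nxt = times[order[k]] if k < len(times) else np.inf
        if last <= t_cand <= nxt:
            return t_cand, frozenset(int(i) for i in idx)
    return np.inf, frozenset()

def count_causal_pieces(times, weights, which, grid):
    """Sweep input spike `which` over `grid`; count distinct causal sets,
    i.e. causal pieces along this 1-D slice of the input domain."""
    t = np.asarray(times, dtype=float).copy()
    pieces = set()
    for x in grid:
        t[which] = x
        _, causal = output_spike_time(t, weights)
        pieces.add(causal)
    return len(pieces)
```

For example, with two unit-weight inputs at times `0.0` and `x`, the neuron fires from the first spike alone once `x >= 1.0`, but both spikes become causal when `x < 1.0`, so a sweep of `x` over `[0, 3]` crosses two causal pieces, and the output time varies Lipschitz-continuously inside each one.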