Time to Spike? Understanding the Representational Power of Spiking Neural Networks in Discrete Time

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the weak theoretical foundation of discrete-time leaky integrate-and-fire spiking neural networks (LIF-SNNs) by establishing the first rigorous characterization of their function-approximation capability and input-space partitioning complexity. Methodologically, the authors introduce a framework based on piecewise-constant function modeling and polyhedral partition analysis, derive tight lower bounds on the network size required to approximate continuous functions, and quantify the combined impact of temporal depth (number of time steps) and architectural depth (number of layers) on representational power. Key contributions: (1) the first provable function-approximation bounds for discrete-time LIF-SNNs; (2) the identification of temporal depth, rather than activation nonlinearity, as the primary driver of expressive capacity, a property that distinguishes SNNs from artificial neural networks (ANNs); and (3) empirical validation of the theoretically predicted exponential growth in partition complexity with temporal depth. Together, these results provide a principled basis for architecture design and further theoretical analysis of SNNs.
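
For reference, a standard discrete-time LIF update can be written as below. This is one common convention (reset by subtraction, Heaviside spike function); the paper's exact formulation may differ:

```latex
% One common discrete-time LIF convention for layer l at time step t:
% leaky integration, Heaviside thresholding, and reset by subtraction.
\begin{aligned}
u^{(l)}[t] &= \beta\, u^{(l)}[t-1] + W^{(l)} s^{(l-1)}[t] - \vartheta\, s^{(l)}[t-1],\\
s^{(l)}[t] &= H\bigl(u^{(l)}[t] - \vartheta\bigr),
\end{aligned}
```

where β ∈ (0, 1] is the leak factor, ϑ the firing threshold, and H the Heaviside step function. Because H outputs only 0 or 1, a network run for T time steps can emit only finitely many spike patterns, which is why the realized input-output map is piecewise constant.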

📝 Abstract
Recent years have seen significant progress in developing spiking neural networks (SNNs) as a potential solution to the energy challenges posed by conventional artificial neural networks (ANNs). However, our theoretical understanding of SNNs remains relatively limited compared to the ever-growing body of literature on ANNs. In this paper, we study a discrete-time model of SNNs based on leaky integrate-and-fire (LIF) neurons, referred to as discrete-time LIF-SNNs, a widely used framework that still lacks solid theoretical foundations. We demonstrate that discrete-time LIF-SNNs with static inputs and outputs realize piecewise constant functions defined on polyhedral regions, and more importantly, we quantify the network size required to approximate continuous functions. Moreover, we investigate the impact of latency (number of time steps) and depth (number of layers) on the complexity of the input space partitioning induced by discrete-time LIF-SNNs. Our analysis highlights the importance of latency and contrasts these networks with ANNs employing piecewise linear activation functions. Finally, we present numerical experiments to support our theoretical findings.
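
To make the model concrete, here is a minimal NumPy sketch of a feedforward discrete-time LIF-SNN with a static input, under the same assumed conventions as above; the function name, architecture, and constants are illustrative rather than taken from the paper:

```python
import numpy as np

def lif_snn_forward(x, weights, T=4, beta=0.9, theta=1.0):
    """Forward pass of a feedforward discrete-time LIF-SNN on a static input.

    The input x is presented at each of the T time steps; every layer applies
    leaky integration, a Heaviside threshold, and reset by subtraction.
    Returns the output layer's spike counts over the T steps.
    """
    spikes = np.tile(x, (T, 1))            # static input repeated over time
    for W in weights:
        u = np.zeros(W.shape[0])           # membrane potentials of this layer
        out = np.zeros((T, W.shape[0]))
        for t in range(T):
            u = beta * u + W @ spikes[t]   # leaky integration
            fired = u >= theta             # Heaviside spike condition
            out[t] = fired
            u -= theta * fired             # reset by subtraction
        spikes = out                       # binary spikes drive the next layer
    return spikes.sum(axis=0)              # spike counts over T steps

# Tiny example: 2 inputs -> 3 hidden LIF neurons -> 1 output LIF neuron.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
print(lif_snn_forward(np.array([0.5, -0.2]), weights))
```

Because every quantity downstream of a threshold is binary, the output can take only finitely many values, so it is constant on polyhedral regions of the input space, exactly the function class the abstract describes.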
Problem

Research questions and friction points this paper is trying to address.

Theoretical understanding of discrete-time LIF-SNNs lags far behind that of ANNs
How large must a discrete-time LIF-SNN be to approximate a given continuous function?
How do latency (number of time steps) and depth (number of layers) shape the input-space partition the network induces?
Innovation

Methods, ideas, or system contributions that make the work stand out.

A theoretical model of discrete-time LIF-SNNs as piecewise-constant functions on polyhedral regions
Quantitative bounds on the network size required to approximate continuous functions
An analysis of how latency and depth govern the complexity of the induced input-space partition (a toy illustration follows this list)
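
As a toy illustration of the latency effect, the sketch below sweeps a scalar input and counts the distinct spike patterns, i.e. the number of constant pieces a single LIF neuron induces on an interval, as the number of time steps T grows. It uses the same assumed conventions and illustrative constants as above; the paper's exponential-growth results concern full networks, not this one-neuron toy:

```python
import numpy as np

def spike_pattern(x, w=1.5, T=4, beta=0.9, theta=1.0):
    """Spike train of one LIF neuron driven by the static scalar input x."""
    u, pattern = 0.0, []
    for _ in range(T):
        u = beta * u + w * x        # leaky integration of the input current
        s = 1 if u >= theta else 0  # Heaviside threshold
        u -= theta * s              # reset by subtraction
        pattern.append(s)
    return tuple(pattern)

# Sweep a 1-D input range and count the distinct spike patterns, i.e. the
# number of constant pieces induced on [0, 2].
xs = np.linspace(0.0, 2.0, 20001)
for T in range(1, 7):
    patterns = {spike_pattern(x, T=T) for x in xs}
    print(f"T={T}: {len(patterns)} distinct spike patterns")
```

Even a single neuron refines its input partition as T grows; the paper quantifies how latency and depth jointly drive this growth for entire networks.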
👥 Authors

Duc Anh Nguyen · Department of Mathematics, Ludwig-Maximilians-Universität München, Germany
Ernesto Araya · Department of Mathematics, Ludwig-Maximilians-Universität München, Germany
Adalbert Fono · LMU Munich (mathematics of deep learning)
Gitta Kutyniok · Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence, LMU Munich (Applied Harmonic Analysis, Artificial Intelligence, Data Science, Imaging Science, Inverse Problems)