🤖 AI Summary
This paper addresses the absence of a unified theoretical characterization of scaling laws (in time, space, and energy) for neuromorphic computing (NMC) relative to the von Neumann architecture. We propose the first general theoretical framework for scaling analysis in NMC. Methodologically, we formulate a dynamic-state-based model of neuromorphic computation in which energy consumption scales with the derivative of the algorithm's state rather than with absolute operation count, as in conventional architectures; we further integrate algorithmic complexity theory with energy-aware analysis to enable cross-paradigm comparison. Our key contributions are: (i) the first formal demonstration that NMC achieves sublinear energy scaling on sparse, iterative workloads (e.g., optimization and large-scale sampling), outperforming von Neumann systems; and (ii) a rigorous theoretical foundation for NMC's "state-driven" energy efficiency, yielding quantifiable design principles and evaluation metrics for low-power intelligent computing.
📝 Abstract
Neuromorphic computing (NMC) is increasingly viewed as a low-power alternative to conventional von Neumann architectures such as central processing units (CPUs) and graphics processing units (GPUs); however, its computational value proposition has been difficult to define precisely.
Here, we explain how NMC should be seen as general-purpose and programmable, even though it differs considerably from a conventional stored-program architecture. We show that the time and space scaling of NMC is equivalent to that of a conventional system with theoretically infinite processors; however, its energy scaling is significantly different. Specifically, the energy of conventional systems scales with the absolute work an algorithm performs, whereas the energy of neuromorphic systems scales with the derivative of the algorithm's state. These characteristics make NMC well suited to different classes of algorithms than conventional multi-core systems such as GPUs, which have been optimized for dense numerical applications such as linear algebra. In particular, NMC is ideally suited to scalable, sparse algorithms whose activity is proportional to an objective function, such as iterative optimization and large-scale sampling (e.g., Monte Carlo).
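The contrast between work-proportional and state-derivative energy can be made concrete with a toy model. The sketch below (an illustrative assumption, not taken from the paper) runs a simple iterative relaxation toward a fixed point and accounts energy two ways: a conventional machine is charged for every operation on every variable at every step, while an event-driven neuromorphic machine is charged only for variables whose state actually changes beyond a threshold. As the state converges, fewer units change per step, so the event-driven energy grows sublinearly in total work.

```python
import numpy as np

# Toy energy model (illustrative assumption, not from the paper):
# an iterative relaxation on N variables converging to a fixed point.
# - conventional energy ~ absolute work (every variable, every step)
# - neuromorphic energy ~ state changes (event-driven "spikes")

rng = np.random.default_rng(0)
N, steps = 1000, 200
x = rng.random(N)              # algorithm state, relaxing toward zero

conventional_energy = 0        # scales with total operation count
neuromorphic_energy = 0        # scales with the derivative of state

for _ in range(steps):
    new_x = 0.9 * x                          # one dense relaxation step
    changed = np.abs(new_x - x) > 1e-3       # a change above threshold = one event
    conventional_energy += N                 # dense update touches all N variables
    neuromorphic_energy += int(changed.sum())  # only active units cost energy
    x = new_x

print(conventional_energy, neuromorphic_energy)
```

Because each variable decays geometrically, it stops generating events after a few dozen steps, so the event-driven total is far below the dense total; in this caricature, the gap widens the longer (or sparser) the run, which is the qualitative scaling behavior the abstract claims.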