🤖 AI Summary
This work addresses next-generation neuromorphic computing by bridging neuroscience mechanisms with artificial intelligence, balancing biological plausibility and hardware efficiency. It proposes NeuroAI Temporal Neural Networks (NeuTNNs), a novel architecture that, for the first time, integrates neuron models featuring active dendrites and proximal-distal compartmentalization into spiking neural networks, along with a customizable NeuTNN microarchitecture. To enable efficient algorithm-to-hardware mapping, the authors develop NeuTNNGen, a PyTorch-to-layout toolchain incorporating synaptic pruning to reduce resource overhead. Experimental results demonstrate that NeuTNNs outperform existing temporal neural network approaches in both accuracy and energy efficiency on UCR time-series, MNIST, and Place Cells benchmarks. Furthermore, the pruning strategy achieves 30–50% hardware cost reduction without compromising performance.
📝 Abstract
Leading experts from both communities have suggested the need to (re)connect research in neuroscience and artificial intelligence (AI) to accelerate the development of next-generation AI innovations; they term this convergence NeuroAI. Previous research has established temporal neural networks (TNNs) as a promising neuromorphic approach toward biological intelligence and efficiency. We fully embrace NeuroAI and propose a new category of TNNs, which we call NeuroAI TNNs (NeuTNNs), that achieve greater capability and hardware efficiency by adopting neuroscience findings, including a neuron model with active dendrites and a hierarchy of distal and proximal segments. This work introduces a PyTorch-to-layout tool suite (NeuTNNGen) for designing application-specific NeuTNNs. Compared to previous TNN designs, NeuTNNs achieve superior performance and efficiency. We demonstrate NeuTNNGen's capabilities with three example applications: 1) UCR time-series benchmarks, 2) MNIST design exploration, and 3) a Place Cells design for neocortical reference frames. We also explore synaptic pruning to further reduce synapse counts and hardware costs by 30-50% while maintaining model precision across diverse sensory modalities. NeuTNNGen can facilitate the design of application-specific, energy-efficient NeuTNNs for the next generation of NeuroAI computing systems.
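The abstract reports a 30-50% reduction in synapse counts and hardware costs from synaptic pruning. As a minimal illustrative sketch (the paper's exact pruning criterion is not given here, so magnitude-based pruning is assumed as a common stand-in), pruning simply removes the fraction of synapses with the smallest absolute weights, which translates directly into fewer synapses to implement in hardware:

```python
def prune_synapses(weights, prune_fraction):
    """Zero out the smallest-magnitude fraction of synaptic weights.

    weights: list of floats, one synaptic strength per entry
    prune_fraction: fraction of synapses to remove, e.g. 0.4 for 40%
    """
    n_prune = int(len(weights) * prune_fraction)
    if n_prune == 0:
        return list(weights)
    # Rank synapse indices by |weight|; the weakest n_prune are pruned.
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned_idx = set(ranked[:n_prune])
    return [0.0 if i in pruned_idx else w for i, w in enumerate(weights)]

# Hypothetical example weights, not values from the paper:
weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned = prune_synapses(weights, 0.5)  # remove the weakest 50% of synapses
```

In an actual NeuTNNGen flow the pruned (zero) synapses would be dropped from the generated layout rather than merely zeroed, which is where the hardware-cost savings come from.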