AI Summary
This work addresses the gap between theoretical energy efficiency and practical deployment constraints in spiking neural networks (SNNs), whose evaluations often overlook real hardware costs such as data movement. To bridge this gap, the authors propose the Matterhorn architecture, which integrates Masked Time-to-First-Spike (M-TTFS) encoding with a "dead zone" sparsification strategy to align spiking activity with input statistics and maximize sparsity. Matterhorn further employs memristive synaptic units (MSUs) for analog compute-in-memory (CIM), eliminating weight-access energy overhead. Evaluated on the GLUE benchmark, the proposed approach improves average accuracy by 1.42% over existing SNNs and delivers 2.31× higher energy efficiency, establishing a new state of the art for SNN performance.
Abstract
Spiking neural networks (SNNs) have emerged as promising candidates for energy-efficient LLM inference. However, current energy evaluations for SNNs primarily count accumulate operations and fail to account for real-world hardware costs such as data movement, which can consume nearly 80% of the total energy. In this paper, we propose Matterhorn, a spiking transformer that integrates a novel masked time-to-first-spike (M-TTFS) encoding method to reduce spike movement and a memristive synapse unit (MSU) to eliminate weight-access overhead. M-TTFS employs a masking strategy that reassigns the zero-energy silent state (a spike train of all 0s) to the most frequent membrane potential rather than the lowest. This aligns the coding scheme with the data distribution, minimizing spike-movement energy without information loss. We further propose a "dead zone" strategy that maximizes sparsity by mapping all values within a given range to the silent state. At the hardware level, the MSU uses compute-in-memory (CIM) technology to perform analog integration directly within memory, effectively removing weight-access costs. On the GLUE benchmark, Matterhorn establishes a new state of the art, surpassing existing SNNs by 1.42% in average accuracy while delivering a 2.31× improvement in energy efficiency.
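To make the encoding idea concrete, here is a minimal NumPy sketch of the two abstract-level ideas: values near the most frequent membrane potential are masked to the all-zero silent state (the "dead zone"), and the remaining values each fire a single spike whose timestep encodes their quantized deviation. The function name, the time-window size `T`, and the linear quantization rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def m_ttfs_encode(values, T=8, dead_zone=0.05, mode_value=0.0):
    """Illustrative sketch of masked time-to-first-spike (M-TTFS) encoding.

    - Values within `dead_zone` of the most frequent value (`mode_value`)
      map to the silent state: an all-zero spike train, costing zero
      movement energy.
    - Every other value emits exactly one spike; larger deviations from
      `mode_value` fire earlier (classic TTFS ordering, assumed here).
    """
    values = np.asarray(values, dtype=np.float64)
    n = values.shape[0]
    spikes = np.zeros((n, T), dtype=np.uint8)  # silent by default

    # Dead zone: anything close to the mode stays silent.
    active = np.abs(values - mode_value) > dead_zone
    if not np.any(active):
        return spikes

    # Quantize active deviations onto T spike times
    # (largest deviation -> earliest spike, at t = 0).
    dev = np.abs(values[active] - mode_value)
    t = ((T - 1) - np.floor(dev / dev.max() * (T - 1))).astype(int)
    spikes[np.nonzero(active)[0], t] = 1
    return spikes
```

With this scheme, a batch dominated by the modal potential produces mostly silent (all-zero) trains, which is exactly the property the paper exploits to cut spike-movement energy.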