🤖 AI Summary
To address the energy-efficiency bottleneck in spiking neural networks (SNNs) caused by computationally expensive digital operations—particularly time-to-first-spike (TTFS) encoding, temporal decay function evaluation, and synaptic weight multiplication—this paper proposes Otters. Otters is the first architecture to exploit the intrinsic analog signal decay of indium oxide (InOₓ) optoelectronic devices as a physical computing primitive, enabling direct analog co-processing of TTFS encoding and weighted decay while eliminating digital multiplication. The authors introduce a hardware–software co-design framework encompassing InOₓ device fabrication, a quantized-network-to-SNN conversion algorithm that extends the paradigm to Transformer architectures (which are difficult to train directly as sparse SNNs), and energy-efficiency modeling based on a commercial 22 nm process. Evaluated on seven GLUE benchmark tasks, Otters achieves state-of-the-art accuracy while improving energy efficiency by 1.77× over prior SNNs, significantly reducing compute, memory access, and data movement overheads.
📝 Abstract
Spiking neural networks (SNNs) promise high energy efficiency, particularly with time-to-first-spike (TTFS) encoding, which maximizes sparsity by emitting at most one spike per neuron. However, this energy advantage is often unrealized because inference requires evaluating a temporal decay function and subsequently multiplying it with the synaptic weights. This paper challenges this costly approach by repurposing a physical hardware "bug", namely the natural signal decay in optoelectronic devices, as the core computation of TTFS. We fabricated a custom indium oxide optoelectronic synapse, showing how its natural physical decay directly implements the required temporal function. By treating the device's analog output as the fused product of the synaptic weight and temporal decay, optoelectronic synaptic TTFS (named Otters) eliminates these expensive digital operations. To use the Otters paradigm in complex architectures like the Transformer, which are challenging to train directly due to the sparsity issue, we introduce a novel quantized neural network-to-SNN conversion algorithm. This complete hardware-software co-design enables our model to achieve state-of-the-art accuracy across seven GLUE benchmark datasets and demonstrates a 1.77× improvement in energy efficiency over previous leading SNNs, based on a comprehensive analysis of compute, data movement, and memory access costs using energy measurements from a commercial 22 nm process. Our work thus establishes a new paradigm for energy-efficient SNNs, translating fundamental device physics directly into powerful computational primitives. All code and data are open source.
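To make the core idea concrete, the sketch below contrasts the two computation paths the abstract describes: a conventional digital TTFS synapse that explicitly evaluates a temporal decay function and multiplies by the weight, versus the Otters view, where the device's analog output is read as the already-fused weight×decay product. The exponential decay form and the time constant `tau` are illustrative assumptions, not details taken from the paper; the analog readout is modeled numerically here.

```python
import math

def digital_ttfs_contribution(weight: float, t_now: float,
                              t_spike: float, tau: float = 10.0) -> float:
    """Digital baseline: explicit decay evaluation followed by a
    synaptic weight multiplication (the two costly operations)."""
    decay = math.exp(-(t_now - t_spike) / tau)  # temporal decay function
    return weight * decay                       # digital multiply

def otters_contribution(weight: float, t_now: float,
                        t_spike: float, tau: float = 10.0) -> float:
    """Otters paradigm (conceptual): the InOx device's natural analog
    decay yields the fused product of weight and decay directly, so no
    explicit digital multiply is performed. Modeled numerically here."""
    return weight * math.exp(-(t_now - t_spike) / tau)  # analog readout (modeled)

# Both paths compute the same value; the saving comes from *where* the
# computation happens: device physics instead of digital arithmetic.
w, t, ts = 0.8, 5.0, 2.0
assert abs(digital_ttfs_contribution(w, t, ts)
           - otters_contribution(w, t, ts)) < 1e-12
```

The point of the sketch is that the numerical result is identical in both paths; Otters' claimed 1.77× efficiency gain comes from eliminating the digital decay evaluation and multiply, not from changing the computed function.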