🤖 AI Summary
This work addresses the low execution efficiency of Transformer architectures in Spiking Neural Networks (SNNs) on heterogeneous hardware, which stems from the difficulty of jointly exploiting spatiotemporal sparsity and enabling data reuse. To tackle this, we propose the Token-Time Bundle (TTB) spike data organization paradigm and a hardware-algorithm co-optimization framework. Our method introduces: (i) a novel containerized TTB representation for spikes; (ii) a stratifier that routes TTB workloads by firing density; (iii) Bundle Sparsity-Aware (BSA) training and Error-Constrained TTB Pruning (ECP); and (iv) a reconfigurable TTB spiking attention core built on simplified AND and Accumulate operations. The framework enables tight software-hardware co-optimization across heterogeneous dense/sparse core arrays. Experiments demonstrate an average 5.91× speedup and 6.11× energy efficiency improvement over state-of-the-art SNN accelerators, while maintaining higher accuracy across multiple benchmark datasets.
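As a rough illustration of the bundling and routing ideas above, the Python sketch below packs a binary spike tensor into TTB containers and sends each bundle to a dense or sparse queue by its firing density. The bundle size, time-step count, and density threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical parameters: the paper does not fix these values.
TOKENS_PER_BUNDLE = 4      # tokens packed into one TTB
TIME_STEPS = 4             # time points bundled together
DENSITY_THRESHOLD = 0.25   # routing threshold (assumption)

def pack_ttbs(spikes):
    """Pack a binary spike tensor (tokens, time, features) into TTBs.

    Each TTB bundles TOKENS_PER_BUNDLE tokens over all TIME_STEPS,
    so one multi-bit weight fetch can be reused across the bundle.
    """
    n_tokens, n_time, _ = spikes.shape
    assert n_time == TIME_STEPS
    return [spikes[start:start + TOKENS_PER_BUNDLE]
            for start in range(0, n_tokens, TOKENS_PER_BUNDLE)]

def stratify(bundles):
    """Route each TTB by firing density: dense core vs. sparse core."""
    dense_queue, sparse_queue = [], []
    for ttb in bundles:
        density = ttb.mean()          # fraction of active spikes in the bundle
        if density >= DENSITY_THRESHOLD:
            dense_queue.append(ttb)   # high-density workload -> dense core array
        else:
            sparse_queue.append(ttb)  # low-density workload -> sparse core array
    return dense_queue, sparse_queue

# Example: random binary spikes for 16 tokens, 4 time steps, 64 features.
spikes = (np.random.rand(16, TIME_STEPS, 64) < 0.1).astype(np.uint8)
dense_q, sparse_q = stratify(pack_ttbs(spikes))
```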
📝 Abstract
We present Bishop, the first dedicated hardware accelerator architecture and HW/SW co-design framework for spiking transformers that optimally represents, manages, and processes spike-based workloads while exploiting spatiotemporal sparsity and data reuse. Specifically, we introduce the concept of the Token-Time Bundle (TTB), a container that bundles the spiking data of a set of tokens over multiple time points. Our heterogeneous accelerator architecture Bishop concurrently processes workloads packed in TTBs and exploits intra- and inter-bundle multiple-bit weight reuse to significantly reduce memory access. Bishop utilizes a stratifier, a dense core array, and a sparse core array to process MLP blocks and projection layers. The stratifier routes high-density spiking activation workloads to the dense core array and low-density counterparts to the sparse core array, ensuring processing tailored to the given spatiotemporal sparsity level. To further reduce data access and computation, we introduce a novel Bundle Sparsity-Aware (BSA) training pipeline that enhances not only the overall but also the structured TTB-level firing sparsity. Moreover, the processing efficiency of self-attention layers is boosted by the proposed Error-Constrained TTB Pruning (ECP), which trims activities in spiking queries, keys, and values both before and after the computation of spiking attention maps, with a well-defined error bound. Finally, we design a reconfigurable TTB spiking attention core to efficiently compute spiking attention maps by executing highly simplified "AND" and "Accumulate" operations. On average, Bishop achieves a 5.91x speedup and a 6.11x improvement in energy efficiency over previous SNN accelerators, while delivering higher accuracy across multiple datasets.
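To make the "AND" and "Accumulate" formulation concrete, here is a minimal NumPy sketch of spiking attention over binary Q, K, and V: because spikes are 0/1, every multiply in Q·Kᵀ reduces to a logical AND followed by accumulation, and the product with binary V becomes a masked accumulation. The function name, tensor shapes, and the omission of ECP pruning and time-step bundling are illustrative assumptions, not the paper's exact dataflow.

```python
import numpy as np

def spiking_attention_and_accumulate(q, k, v):
    """Sketch of spike attention using only AND and Accumulate.

    q, k, v are binary spike matrices of shape (tokens, d).
    Because entries are 0/1, each multiply in Q @ K^T reduces to a
    logical AND followed by an accumulation (a population count).
    """
    n, d = q.shape
    scores = np.zeros((n, n), dtype=np.int32)
    for i in range(n):
        for j in range(n):
            # AND the two spike rows, then accumulate the set bits.
            scores[i, j] = np.sum(np.logical_and(q[i], k[j]))

    # The second product with binary V is also multiplier-free:
    # each output element accumulates scores wherever V spikes.
    out = np.zeros((n, d), dtype=np.int32)
    for i in range(n):
        for j in range(d):
            out[i, j] = np.sum(scores[i, :][v[:, j].astype(bool)])
    return scores, out

# Toy example with random binary spikes (8 tokens, 16 channels).
rng = np.random.default_rng(0)
q = (rng.random((8, 16)) < 0.2).astype(np.uint8)
k = (rng.random((8, 16)) < 0.2).astype(np.uint8)
v = (rng.random((8, 16)) < 0.2).astype(np.uint8)
scores, out = spiking_attention_and_accumulate(q, k, v)
```

The loops are written for readability; the same result follows from `q @ k.T` and `scores @ v`, but spelling out the AND/accumulate form mirrors the multiplier-free hardware operation the abstract describes.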