🤖 AI Summary
Spiking neural network (SNN)-based Vision Transformers (ViTs) suffer from weak spatial correlations and limited temporal interactions due to the mismatch between conventional self-attention mechanisms and the intrinsic spatiotemporal dynamics of spiking neurons, resulting in inferior accuracy and efficiency compared to artificial neural networks (ANNs). Method: This paper proposes a spike-driven Vision Transformer tailored for edge vision, centered on a biologically inspired Saccadic Spike Self-Attention (SSSA) mechanism. SSSA jointly models spatial structure via spike-distribution-guided attention and enables dynamic temporal interaction through a saccadic module that focuses on selected visual areas at each timestep. Contribution/Results: The design preserves linear computational complexity while significantly enhancing cross-neuronal spatiotemporal coordination. Evaluated across multiple vision benchmarks, the proposed model achieves state-of-the-art (SOTA) accuracy among SNN-based ViTs and delivers substantially improved energy efficiency, making it well suited to low-power edge intelligence devices.
📝 Abstract
The combination of Spiking Neural Networks (SNNs) and Vision Transformers (ViTs) holds potential for achieving both energy efficiency and high performance, making it particularly suitable for edge vision applications. However, a significant performance gap still exists between SNN-based ViTs and their ANN counterparts. Here, we first analyze why SNN-based ViTs suffer from limited performance and identify a mismatch between the vanilla self-attention mechanism and spatio-temporal spike trains. This mismatch results in degraded spatial relevance and limited temporal interactions. To address these issues, we draw inspiration from biological saccadic attention mechanisms and introduce an innovative Saccadic Spike Self-Attention (SSSA) method. Specifically, in the spatial domain, SSSA employs a novel spike distribution-based method to effectively assess the relevance between Query and Key pairs in SNN-based ViTs. Temporally, SSSA employs a saccadic interaction module that dynamically focuses on selected visual areas at each timestep and significantly enhances whole-scene understanding through temporal interactions. Building on the SSSA mechanism, we develop an SNN-based Vision Transformer (SNN-ViT). Extensive experiments across various visual tasks demonstrate that SNN-ViT achieves state-of-the-art performance with linear computational complexity. The effectiveness and efficiency of the SNN-ViT highlight its potential for power-critical edge vision applications.
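To make the linear-complexity claim concrete, here is a minimal NumPy sketch of how spike-driven attention can avoid the quadratic cost of vanilla self-attention by reordering the matrix products. This is an illustrative assumption, not the paper's actual SSSA: the real mechanism uses a spike-distribution-based relevance measure and a saccadic temporal-interaction module, whereas this sketch only shows the `K^T V`-first reordering that is common to linear spike-driven attention designs; all tensor names and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_linear_attention(q, k, v):
    """Linear-complexity attention over binary spike tensors.

    Illustrative sketch only (not the paper's SSSA): computing
    K^T V before multiplying by Q turns the O(N^2 * d) attention
    cost into O(N * d^2), linear in the number of tokens N.
    q, k, v: (T, N, d) binary spike tensors
             (timesteps, tokens, feature dim).
    """
    T, N, d = q.shape
    out = np.empty((T, N, d), dtype=np.float64)
    for t in range(T):
        kv = k[t].T @ v[t]      # (d, d): token dimension contracted first
        out[t] = q[t] @ kv / N  # (N, d), scaled by token count
    return out

# Random binary spike trains standing in for Query/Key/Value spikes
q = rng.integers(0, 2, size=(4, 16, 8))
k = rng.integers(0, 2, size=(4, 16, 8))
v = rng.integers(0, 2, size=(4, 16, 8))
y = spike_linear_attention(q, k, v)
print(y.shape)  # (4, 16, 8)
```

By associativity, `Q (K^T V)` equals `(Q K^T) V`, so the reordering changes the cost but not the result; spike-driven designs additionally exploit the fact that Q, K, V are binary, so these products reduce to accumulations rather than multiply-accumulates.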