Spiking Vision Transformer with Saccadic Attention

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Spiking neural network (SNN)-based Vision Transformers (ViTs) suffer from weak spatial correlations and limited temporal interactions because conventional self-attention mechanisms are mismatched with the intrinsic spatiotemporal dynamics of spiking neurons, yielding inferior accuracy and efficiency compared to artificial neural networks (ANNs). Method: This paper proposes a spike-driven Vision Transformer tailored for edge vision, centered on a biologically inspired Saccadic Spike Self-Attention (SSSA) mechanism. SSSA models spatial structure via spike-distribution-guided attention and enables dynamic temporal interaction through a saccadic module that focuses on selected visual areas at each timestep. Contribution/Results: The design preserves linear computational complexity while significantly enhancing cross-neuronal spatiotemporal coordination. Evaluated across multiple vision benchmarks, the resulting model achieves state-of-the-art (SOTA) accuracy among SNN-based ViTs with substantially improved energy efficiency, making it well suited for low-power edge intelligence devices.

📝 Abstract
The combination of Spiking Neural Networks (SNNs) and Vision Transformers (ViTs) holds potential for achieving both energy efficiency and high performance, particularly suitable for edge vision applications. However, a significant performance gap still exists between SNN-based ViTs and their ANN counterparts. Here, we first analyze why SNN-based ViTs suffer from limited performance and identify a mismatch between the vanilla self-attention mechanism and spatio-temporal spike trains. This mismatch results in degraded spatial relevance and limited temporal interactions. To address these issues, we draw inspiration from biological saccadic attention mechanisms and introduce an innovative Saccadic Spike Self-Attention (SSSA) method. Specifically, in the spatial domain, SSSA employs a novel spike distribution-based method to effectively assess the relevance between Query and Key pairs in SNN-based ViTs. Temporally, SSSA employs a saccadic interaction module that dynamically focuses on selected visual areas at each timestep and significantly enhances whole scene understanding through temporal interactions. Building on the SSSA mechanism, we develop a SNN-based Vision Transformer (SNN-ViT). Extensive experiments across various visual tasks demonstrate that SNN-ViT achieves state-of-the-art performance with linear computational complexity. The effectiveness and efficiency of the SNN-ViT highlight its potential for power-critical edge vision applications.
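The paper does not include code on this page, but the linear-complexity claim rests on a standard reordering that spike-driven attention designs exploit: because spike-based Query/Key relevance avoids the softmax, attention can be computed as Q(KᵀV) instead of (QKᵀ)V, replacing the N×N score matrix with a d×d summary. The sketch below illustrates only that generic reordering on binary spike tensors; the shapes, the 1/N normalization, and the per-timestep loop are illustrative assumptions, not the paper's SSSA method.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, d = 4, 16, 8  # timesteps, tokens, embedding dim (assumed toy sizes)

# Binary spike trains standing in for Query, Key, Value
Q = rng.integers(0, 2, size=(T, N, d)).astype(np.float32)
K = rng.integers(0, 2, size=(T, N, d)).astype(np.float32)
V = rng.integers(0, 2, size=(T, N, d)).astype(np.float32)

def linear_spike_attention(Q, K, V):
    """Per timestep, compute Q @ (K^T V): the d x d matrix K^T V
    summarizes the scene once, so cost is O(N * d^2) per step
    instead of the O(N^2 * d) of an explicit score matrix."""
    out = np.empty_like(V)
    T, N, _ = V.shape
    for t in range(T):
        kv = K[t].T @ V[t]        # d x d summary, built once per step
        out[t] = Q[t] @ kv / N    # each query token reads the summary
    return out

out = linear_spike_attention(Q, K, V)
print(out.shape)  # (4, 16, 8)
```

By associativity, Q(KᵀV) equals (QKᵀ)V exactly; the reordering changes cost, not the result, which is why dropping the softmax (as spike-count relevance does) is what unlocks linear complexity in sequence length.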
Problem

Research questions and friction points this paper is trying to address.

SNN-based ViTs performance gap
Mismatch in self-attention mechanism
Enhanced spatial-temporal spike interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Saccadic Spike Self-Attention method
Spike distribution-based relevance assessment
Dynamic temporal interaction enhancement
Shuai Wang
University of Electronic Science and Technology of China
Malu Zhang
University of Electronic Science and Technology of China
Dehao Zhang
University of Electronic Science and Technology of China
A. Belatreche
Northumbria University
Yichen Xiao
University of Electronic Science and Technology of China
Yu Liang
University of Electronic Science and Technology of China
Yimeng Shan
Liaoning Technical University
Qian Sun
University of Electronic Science and Technology of China
Enqi Zhang
University of Electronic Science and Technology of China
Yang Yang
University of Electronic Science and Technology of China