Hybrid Spiking Vision Transformer for Object Detection with Event Cameras

πŸ“… 2025-05-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the challenges of spatiotemporal feature modeling and weak long-range temporal dependency capture in event-camera-based object detection, this paper proposes the Hybrid Spiking Vision Transformer (HsVT), an architecture that integrates spiking neural networks (SNNs) with Vision Transformers to combine fine-grained spatial representation with dynamic temporal modeling of asynchronous Address-Event Representation (AER) event streams. Methodologically, the model pairs a spatial feature extraction module, capturing local and global features, with a temporal feature extraction module that models time dependencies and long-term patterns in event sequences. Contributions include: (1) a lightweight, privacy-preserving, publicly released benchmark dataset for event-camera-based fall detection; and (2) improved detection accuracy on both the GEN1 and Fall Detection datasets, achieved with fewer parameters than comparable models.
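The Address-Event Representation (AER) streams the summary mentions are sparse lists of (timestamp, x, y, polarity) tuples. Before an event stream reaches a frame-based detector it is commonly densified into per-bin polarity count frames; the sketch below shows one such conversion. This is a generic preprocessing step, not the paper's specific input encoding, and the function name and bin scheme are assumptions.

```python
import numpy as np

def events_to_frames(events, height, width, num_bins):
    """Accumulate an AER event stream into a stack of two-channel
    (ON/OFF polarity) count frames, one frame per time bin.

    `events` is an (N, 4) array of [timestamp, x, y, polarity] rows,
    with polarity in {0, 1}. Hypothetical helper for illustration;
    the paper's exact event encoding may differ.
    """
    frames = np.zeros((num_bins, 2, height, width), dtype=np.float32)
    if len(events) == 0:
        return frames
    t = events[:, 0]
    # Map each timestamp to a bin index in [0, num_bins - 1].
    t0, t1 = t.min(), t.max()
    span = max(t1 - t0, 1e-9)
    bins = np.minimum(((t - t0) / span * num_bins).astype(int), num_bins - 1)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3].astype(int)
    # Unbuffered scatter-add: repeated (bin, p, y, x) indices accumulate.
    np.add.at(frames, (bins, p, y, x), 1.0)
    return frames
```

Because frames are count-based rather than intensity-based, this representation keeps memory usage low and carries no recognizable facial texture, which is the privacy property the Fall Detection Dataset relies on.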

πŸ“ Abstract
Event-based object detection has gained increasing attention due to its advantages such as high temporal resolution, wide dynamic range, and asynchronous address-event representation. Leveraging these advantages, Spiking Neural Networks (SNNs) have emerged as a promising approach, offering low energy consumption and rich spatiotemporal dynamics. To further enhance the performance of event-based object detection, this study proposes a novel hybrid spike vision Transformer (HsVT) model. The HsVT model integrates a spatial feature extraction module to capture local and global features, and a temporal feature extraction module to model time dependencies and long-term patterns in event sequences. This combination enables HsVT to capture spatiotemporal features, improving its capability to handle complex event-based object detection tasks. To support research in this area, we developed and publicly released the Fall Detection Dataset as a benchmark for event-based object detection tasks. This dataset, captured using an event-based camera, ensures facial privacy protection and reduces memory usage due to the event representation format. We evaluated the HsVT model on the GEN1 and Fall Detection datasets across various model sizes. Experimental results demonstrate that HsVT achieves significant performance improvements in event detection with fewer parameters.
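The "rich spatiotemporal dynamics" of SNNs in the abstract come from stateful spiking neurons: each unit integrates input over time, leaks toward rest, and emits a binary spike when its membrane potential crosses a threshold. A minimal leaky integrate-and-fire (LIF) sketch of that dynamic is below; the time constant, threshold, and hard-reset rule are generic textbook choices, not the paper's specific neuron model.

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron over T steps.

    `inputs` is a (T,) array of input currents. The membrane potential
    decays toward `v_reset`, integrates each input, and emits a binary
    spike (then hard-resets) whenever it crosses `v_threshold`.
    Illustrative only; HsVT's exact neuron model is not specified here.
    """
    v = v_reset
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        # Leaky integration: pull toward rest, then add the input drive.
        v = v + (x - (v - v_reset)) / tau
        if v >= v_threshold:
            spikes[t] = 1.0
            v = v_reset  # hard reset after a spike
    return spikes
```

Because the output at each step is a 0/1 spike rather than a dense activation, downstream layers only compute on active events, which is where the low energy consumption claimed for SNN-based detectors comes from.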
Problem

Research questions and friction points this paper is trying to address.

Enhancing event-based object detection with a hybrid spiking vision Transformer
Integrating spatial and temporal feature extraction to capture spatiotemporal patterns
Providing a privacy-protected benchmark dataset for event-based detection tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Spiking Vision Transformer for object detection
Combines spatial and temporal feature extraction modules
Achieves high performance with fewer parameters
Qi Xu
School of Computer Science and Technology, Dalian University of Technology, Dalian, China
Jie Deng
Professor, University of Pennsylvania
lymphedema · symptom management · cancer survivorship · oncology nursing
Jiangrong Shen
Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an, China; National Key Lab of Human-Machine Hybrid Augmented Intelligence, Xi’an Jiaotong University, Xi’an, China; State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
Biwu Chen
Shanghai Radio Equipment Research Institute, Shanghai, China
Huajin Tang
Zhejiang University, China
Brain-inspired AI · neurorobotics · spiking neural networks · brain-inspired computing
Gang Pan
Tianjin University
Computer vision · Multimodal · AI