SynSacc: A Blender-to-V2E Pipeline for Synthetic Neuromorphic Eye-Movement Data and Sim-to-Real Spiking Model Training

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that eye movement classification—such as distinguishing saccades from fixations—is highly sensitive to temporal resolution and motion blur, limitations that conventional frame-based cameras struggle to overcome. To tackle this, the authors propose a novel approach leveraging synthetic event data: they construct oculomotor scenes in Blender and generate high-fidelity neuromorphic event streams using the V2E simulator. This study is the first to combine such synthetic event data with spiking neural networks (SNNs) for eye movement classification. After fine-tuning on real event data, the model achieves an accuracy of 0.83 on test sets and demonstrates robust performance across varying temporal resolutions. Compared to traditional artificial neural networks (ANNs), the SNN-based method substantially reduces computational overhead, enabling efficient and robust simulation-to-reality transfer.
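The core of the pipeline is converting rendered Blender video into DVS-style event streams. As a rough illustration of what a simulator like V2E does at its heart (this is a minimal sketch, not V2E's actual implementation, which additionally models noise, bandwidth, and leak events), each pixel emits an event whenever its log-intensity changes by more than a contrast threshold since the last event at that pixel:

```python
import numpy as np

def frames_to_events(frames, times, threshold=0.2, eps=1e-6):
    """Convert a stack of grayscale frames into DVS-style events.

    Each pixel emits an event (t, x, y, polarity) whenever its
    log-intensity changes by more than `threshold` since the last
    event at that pixel -- the core idea behind event-camera
    simulators such as V2E (simplified: no noise or bandwidth model).
    """
    log_ref = np.log(frames[0].astype(np.float64) + eps)
    events = []
    for frame, t in zip(frames[1:], times[1:]):
        log_now = np.log(frame.astype(np.float64) + eps)
        diff = log_now - log_ref
        # brightening pixels emit +1 events, darkening pixels -1 events
        for polarity, mask in ((+1, diff >= threshold), (-1, diff <= -threshold)):
            ys, xs = np.nonzero(mask)
            events.extend((t, int(x), int(y), polarity) for x, y in zip(xs, ys))
            # reset the per-pixel reference where events fired
            log_ref[mask] = log_now[mask]
    return events
```

The asynchronous, sparse output of this thresholding step is what gives event data its freedom from motion blur: only pixels whose brightness actually changes produce data, regardless of how fast the eye model moves.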

📝 Abstract
The study of eye movements, particularly saccades and fixations, is fundamental to understanding the mechanisms of human cognition and perception. Accurate classification of these movements requires sensing technologies capable of capturing rapid dynamics without distortion. Event cameras, also known as Dynamic Vision Sensors (DVS), provide asynchronous recordings of changes in light intensity, thereby eliminating the motion blur inherent in conventional frame-based cameras and offering superior temporal resolution and data efficiency. In this study, we introduce a synthetic dataset generated with Blender to simulate saccades and fixations under controlled conditions. Leveraging Spiking Neural Networks (SNNs), we evaluate the dataset's robustness by training two architectures and fine-tuning on real event data. The proposed models achieve up to 0.83 accuracy and maintain consistent performance across varying temporal resolutions, demonstrating stability in eye movement classification. Moreover, the use of SNNs with synthetic event streams yields substantial computational efficiency gains over artificial neural network (ANN) counterparts, underscoring the utility of synthetic data augmentation in advancing event-based vision. All code and datasets associated with this work are available at https://github.com/Ikhadija-5/SynSacc-Dataset.
Problem

Research questions and friction points this paper is trying to address.

eye movements
saccades
event cameras
Dynamic Vision Sensors
spiking neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic Neuromorphic Data
Spiking Neural Networks
Event Camera
Sim-to-Real Transfer
Eye Movement Classification
Khadija Iddrisu
Dublin City University, Dublin, Ireland
Waseem Shariff
University of Galway, Galway, Ireland
Suzanne Little
School of Computing, Dublin City University
Multimedia Information Retrieval
Noel O'Connor
Dublin City University, Dublin, Ireland