Time-step Mixup for Efficient Spiking Knowledge Transfer from Appearance to Event Domain

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cross-modal knowledge transfer from RGB images to event-based data is hampered by the substantial distributional discrepancy between Dynamic Vision Sensor (DVS) and RGB modalities, and by the severe scarcity of labeled DVS data. To address this, the paper proposes Time-step Mixup knowledge transfer (TMKT), a fine-grained time-step mixing method tailored to Spiking Neural Networks (SNNs). The approach: (1) exploits the asynchronous temporal dynamics of SNNs to align and interpolate RGB and DVS inputs across multiple time steps; (2) introduces a modality-aware time-step Mixup mechanism to strengthen cross-modal feature alignment; and (3) incorporates modality-aware auxiliary learning objectives to mitigate modality shift. Evaluated on multiple standard benchmarks, the method significantly improves SNN performance on image classification, enabling efficient and robust RGB→DVS semantic knowledge transfer, and it outperforms existing methods in both generalization and training stability.
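The paper's exact mixing rule is not reproduced here, but the core idea (feeding an SNN a sequence of T time steps in which some steps come from the RGB modality and some from DVS, in proportion to a mixing ratio) can be sketched roughly as follows. The function name `time_step_mixup`, the `(T, C, H, W)` frame layout, and the random per-step selection are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def time_step_mixup(rgb_frames, dvs_frames, lam, rng=None):
    """Sketch: mix RGB and DVS inputs across SNN time steps.

    rgb_frames, dvs_frames: arrays of shape (T, C, H, W) -- an RGB image
    repeated (or rate-encoded) over T steps, and paired DVS event frames.
    lam: fraction of time steps drawn from the DVS modality.
    Returns the mixed input sequence and a per-step modality mask
    (True where the step came from DVS).
    """
    rng = rng or np.random.default_rng()
    T = rgb_frames.shape[0]
    n_dvs = int(round(lam * T))
    # Randomly choose which time steps take the DVS frame.
    mask = np.zeros(T, dtype=bool)
    mask[rng.choice(T, size=n_dvs, replace=False)] = True
    # Broadcast the (T,) mask over the (T, C, H, W) frames.
    mixed = np.where(mask[:, None, None, None], dvs_frames, rgb_frames)
    return mixed, mask
```

The per-step mask is what makes the mixing "fine-grained": unlike classic Mixup, which blends pixel values, each time step stays a pure sample from one modality, which matches the discrete, asynchronous way SNNs consume their inputs.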

📝 Abstract
The integration of event cameras and spiking neural networks holds great promise for energy-efficient visual processing. However, the limited availability of event data and the sparse nature of DVS outputs pose challenges for effective training. Although some prior work has attempted to transfer semantic knowledge from RGB datasets to DVS, they often overlook the significant distribution gap between the two modalities. In this paper, we propose Time-step Mixup knowledge transfer (TMKT), a novel fine-grained mixing strategy that exploits the asynchronous nature of SNNs by interpolating RGB and DVS inputs at various time-steps. To enable label mixing in cross-modal scenarios, we further introduce modality-aware auxiliary learning objectives. These objectives support the time-step mixup process and enhance the model's ability to discriminate effectively across different modalities. Our approach enables smoother knowledge transfer, alleviates modality shift during training, and achieves superior performance in spiking image classification tasks. Extensive experiments demonstrate the effectiveness of our method across multiple datasets. The code will be released after the double-blind review process.
Problem

Research questions and friction points this paper is trying to address.

Addressing event data scarcity for spiking neural networks
Bridging distribution gap between RGB and DVS modalities
Enabling efficient knowledge transfer across visual domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Time-step Mixup for RGB-DVS interpolation
Modality-aware auxiliary learning objectives
Asynchronous SNN knowledge transfer enhancement
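The modality-aware auxiliary objective could plausibly be realized as a standard classification loss plus a modality-discrimination term whose soft target is the mixing ratio itself. The sketch below is an assumption about that structure, not the paper's loss: `mixup_losses`, the two-way modality head, and the weight `alpha` are all hypothetical names:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mixup_losses(class_logits, modality_logits, y, lam, alpha=0.1):
    """Sketch of a combined objective for time-step-mixed batches.

    class_logits: (N, K) classifier outputs for the mixed batch.
    modality_logits: (N, 2) outputs of an auxiliary modality head
        (index 0 = RGB, index 1 = DVS).
    y: (N,) integer class labels, shared between each RGB image and its
        paired DVS sample, so the class target needs no interpolation.
    lam: fraction of time steps taken from DVS, used as the soft target
        for the modality head.
    Returns (total, classification_ce, auxiliary_ce).
    """
    # Standard cross-entropy on the class labels.
    p = softmax(class_logits)
    ce = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    # Soft-target cross-entropy: a lam-fraction of the input was DVS.
    q = softmax(modality_logits)
    aux = -(lam * np.log(q[:, 1] + 1e-12)
            + (1 - lam) * np.log(q[:, 0] + 1e-12)).mean()
    return ce + alpha * aux, ce, aux
```

Forcing the network to estimate how much of its input came from each modality is one way such an auxiliary head could "discriminate effectively across different modalities," as the abstract puts it, while the main head focuses on the class label.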
Yuqi Xie
Faculty of Electrical Engineering and Computer Science, Ningbo University, China
Shuhan Ye
Nanyang Technological University, Singapore
Chong Wang
Faculty of Electrical Engineering and Computer Science, Ningbo University, China; Merchants’ Guild Economics and Cultural Intelligent Computing Laboratory, Ningbo University, China
Jiazhen Xu
The Australian National University
Le Shen
Faculty of Electrical Engineering and Computer Science, Ningbo University, China
Yuanbin Qian
Faculty of Electrical Engineering and Computer Science, Ningbo University, China
Jiangbo Qian
Faculty of Electrical Engineering and Computer Science, Ningbo University, China; Merchants’ Guild Economics and Cultural Intelligent Computing Laboratory, Ningbo University, China