🤖 AI Summary
Addressing the challenges of cross-modal knowledge transfer from RGB images to event-based data—stemming from substantial distributional discrepancies between Dynamic Vision Sensor (DVS) and RGB modalities, and the severe scarcity of labeled DVS data—this paper proposes a fine-grained time-step mixing method tailored for Spiking Neural Networks (SNNs). The approach: (1) exploits the asynchronous temporal dynamics of SNNs to align and interpolate RGB and DVS inputs across multiple time steps; (2) introduces a modality-aware time-step Mixup mechanism to enhance cross-modal feature alignment; and (3) incorporates a modality-aware multi-task auxiliary learning objective to mitigate modality shift. Evaluated on multiple standard benchmarks, the method significantly improves SNN performance on image classification, enabling efficient and robust RGB→DVS semantic knowledge transfer, and outperforms existing methods in both generalization and training stability.
📝 Abstract
The integration of event cameras and spiking neural networks holds great promise for energy-efficient visual processing. However, the limited availability of event data and the sparse nature of DVS outputs pose challenges for effective training. Although prior works have attempted to transfer semantic knowledge from RGB datasets to DVS, they often overlook the significant distribution gap between the two modalities. In this paper, we propose Time-step Mixup knowledge transfer (TMKT), a novel fine-grained mixing strategy that exploits the asynchronous nature of SNNs by interpolating RGB and DVS inputs at different time-steps. To enable label mixing in cross-modal scenarios, we further introduce modality-aware auxiliary learning objectives. These objectives support the time-step mixup process and enhance the model's ability to discriminate between modalities. Our approach enables smoother knowledge transfer, alleviates modality shift during training, and achieves superior performance on spiking image classification tasks. Extensive experiments demonstrate the effectiveness of our method across multiple datasets. The code will be released after the double-blind review process.
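The core idea of interpolating RGB and DVS inputs at different time-steps can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `time_step_mixup`, the per-step Beta-distributed mixing ratios, and the use of the mean ratio for label interpolation are all assumptions for demonstration purposes.

```python
import numpy as np

def time_step_mixup(rgb_frames, dvs_frames, rng=None):
    """Blend RGB and DVS inputs with a separate ratio per time step.

    rgb_frames, dvs_frames: arrays of shape (T, C, H, W), where T is
    the number of SNN time steps. Returns the mixed input sequence,
    the per-step mixing ratios, and their mean (a plausible choice
    for interpolating the class labels).
    """
    rng = np.random.default_rng() if rng is None else rng
    T = rgb_frames.shape[0]
    # Hypothetical schedule: one ratio per time step, so each step
    # sees a different RGB/DVS blend (the "fine-grained" mixing).
    lam = rng.beta(1.0, 1.0, size=T)            # shape (T,)
    lam_b = lam[:, None, None, None]            # broadcast over C, H, W
    mixed = lam_b * rgb_frames + (1.0 - lam_b) * dvs_frames
    return mixed, lam, lam.mean()

# Toy usage: T=4 time steps of 3x8x8 inputs.
rgb = np.random.rand(4, 3, 8, 8)
dvs = np.random.rand(4, 3, 8, 8)
mixed, lam, lam_bar = time_step_mixup(rgb, dvs)
# A classification loss could interpolate labels with lam_bar, while a
# modality-aware auxiliary head predicts the modality mix per time step.
```

Under this reading, the per-step ratios give the SNN a gradual transition between modalities within a single forward pass, which is what a single global Mixup coefficient cannot provide.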