🤖 AI Summary
To address two challenges in spiking neural networks (SNNs) for speech classification—modeling multi-scale temporal dynamics, and applying residual connections between sub-modules whose outputs have mismatched time lengths—this paper proposes the Temporal Reconstruction (TR) and Non-Aligned Residual (NAR) methods. The TR module reshapes the temporal dimension of the audio spectrum, inspired by the brain's hierarchical processing of speech, so the network can learn input information at multiple temporal resolutions. The NAR module enables residual connections between spike sequences of different time lengths, relaxing the strict temporal alignment previously required by residual structures in SNNs. Evaluated on the Spiking Speech Commands (SSC), Spiking Heidelberg Digits (SHD), and Google Speech Commands v0.02 (GSC) datasets, the method achieves 81.02% test accuracy on SSC—state of the art among SNN models—and 96.04% on SHD, the best reported accuracy among all models.
📝 Abstract
Most recent models based on spiking neural networks (SNNs) use only a single temporal resolution for speech classification, which prevents them from learning information from the input data at different temporal scales. Additionally, because the data entering and leaving the sub-modules of many models differ in time length, effective residual connections cannot be applied to optimize training. To solve these problems, on the one hand, we reconstruct the temporal dimension of the audio spectrum and propose a novel method named Temporal Reconstruction (TR), inspired by the hierarchical process by which the human brain understands speech. With TR, the reconstructed SNN model can learn information from the input data at different temporal resolutions and thus model more comprehensive semantic information from the audio. On the other hand, based on an analysis of the audio data, we propose the Non-Aligned Residual (NAR) method, which allows a residual connection to be applied between two audio sequences with different time lengths. We conducted extensive experiments on the Spiking Speech Commands (SSC), Spiking Heidelberg Digits (SHD), and Google Speech Commands v0.02 (GSC) datasets. The proposed method achieves a state-of-the-art (SOTA) test classification accuracy of 81.02% on SSC among all SNN models, and a SOTA classification accuracy of 96.04% on SHD among all models.
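The abstract does not give implementation details, but the two ideas can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration only: here "temporal reconstruction" is read as folding groups of consecutive time steps into the feature dimension to obtain a coarser temporal resolution, and the "non-aligned residual" is read as resampling the longer sequence to the shorter one's length before the element-wise sum. The actual paper may define both operations differently.

```python
import numpy as np

def temporal_reconstruction(spikes, scale):
    """Hypothetical TR sketch: fold `scale` consecutive time steps into the
    feature dimension, halving (etc.) the temporal resolution.
    spikes: array of shape (T, C); returns shape (T // scale, scale * C)."""
    T, C = spikes.shape
    T_new = T // scale
    # Drop trailing steps that do not fill a complete group of `scale`.
    return spikes[: T_new * scale].reshape(T_new, scale * C)

def _resample_time(x, T_new):
    """Nearest-index resampling of a (T, C) sequence to T_new steps."""
    T = x.shape[0]
    idx = np.arange(T_new) * T // T_new
    return x[idx]

def non_aligned_residual(x, y):
    """Hypothetical NAR sketch: add two sequences whose time lengths differ
    (channel dims assumed equal) by resampling both to the shorter length."""
    T = min(x.shape[0], y.shape[0])
    return _resample_time(x, T) + _resample_time(y, T)

# Example: a 6-step, 2-channel sequence reconstructed at scale 2
# becomes a 3-step, 4-channel sequence.
s = np.arange(12).reshape(6, 2)
coarse = temporal_reconstruction(s, scale=2)   # shape (3, 4)

# A residual between a 6-step and a 4-step sequence of matching width.
r = non_aligned_residual(np.ones((6, 2)), np.ones((4, 2)))  # shape (4, 2)
```

This sketch only shows why ordinary residual addition fails without an alignment step: `x + y` on shapes `(6, 2)` and `(4, 2)` would raise a broadcasting error, whereas the resampling makes the sum well-defined.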