🤖 AI Summary
To address the excessive energy consumption of neural vocoders on edge devices, this paper proposes an ultra-low-power spiking neural vocoder. Methodologically: (1) a Spiking ConvNeXt module with amplitude shortcut paths is designed to alleviate the information bottleneck in spiking neural networks (SNNs); (2) a self-architectural distillation strategy enables efficient knowledge transfer from artificial neural networks (ANNs) to SNNs; and (3) a lightweight Temporal Shift Module strengthens temporal modeling, while event-driven computation reduces multiply-accumulate (MAC) operations. Experiments show that the proposed vocoder consumes only 14.7% of the energy required by its ANN counterpart while achieving competitive speech quality (UTMOS 3.74, PESQ 3.45), on par with the ANN baseline. To our knowledge, this is the first SNN-based vocoder architecture to combine high-fidelity speech synthesis with energy efficiency suitable for edge deployment.
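The "information bottleneck" in point (1) refers to spiking neurons emitting binary events, which discards signal amplitude. A minimal pure-Python sketch of the idea, using a standard leaky integrate-and-fire (LIF) neuron: the `amplitude_shortcut` function and its `alpha` mixing weight are illustrative assumptions, not the paper's actual formulation.

```python
def lif_forward(currents, tau=2.0, v_th=1.0):
    """Leaky integrate-and-fire neuron over a 1-D sequence of input
    currents; returns binary spikes and pre-reset membrane potentials."""
    v = 0.0
    spikes, potentials = [], []
    for c in currents:
        v = v + (c - v) / tau          # leaky integration step
        potentials.append(v)
        if v >= v_th:                  # fire, then hard-reset
            spikes.append(1.0)
            v = 0.0
        else:
            spikes.append(0.0)
    return spikes, potentials

def amplitude_shortcut(currents, alpha=0.5):
    """Hypothetical shortcut path: re-inject the real-valued membrane
    amplitude that spike binarization discards (alpha is an illustrative
    mixing weight, not a value from the paper)."""
    spikes, potentials = lif_forward(currents)
    return [s + alpha * v for s, v in zip(spikes, potentials)]
```

The binary spike train keeps the event-driven efficiency, while the analog side path preserves the dynamics that a purely binary representation would lose.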
📝 Abstract
Despite the remarkable progress in the synthesis speed and fidelity of neural vocoders, their high energy consumption remains a critical barrier to practical deployment on computationally restricted edge devices. Spiking Neural Networks (SNNs), widely recognized for their high energy efficiency due to their event-driven nature, offer a promising solution for low-resource scenarios. In this paper, we propose Spiking Vocos, a novel spiking neural vocoder with ultra-low energy consumption, built upon the efficient Vocos framework. To mitigate the inherent information bottleneck in SNNs, we design a Spiking ConvNeXt module to reduce Multiply-Accumulate (MAC) operations and incorporate an amplitude shortcut path to preserve crucial signal dynamics. Furthermore, to bridge the performance gap with its Artificial Neural Network (ANN) counterpart, we introduce a self-architectural distillation strategy to effectively transfer knowledge. A lightweight Temporal Shift Module is also integrated to enhance the model's ability to fuse information across the temporal dimension with negligible computational overhead. Experiments demonstrate that our model achieves performance comparable to its ANN counterpart, with UTMOS and PESQ scores of 3.74 and 3.45 respectively, while consuming only 14.7% of the energy. The source code is available at https://github.com/pymaster17/Spiking-Vocos.
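The 14.7% energy figure follows from SNNs replacing MACs with sparse accumulates. A back-of-the-envelope sketch of the standard accounting, using the commonly cited 45 nm CMOS per-operation energies (roughly 4.6 pJ per 32-bit MAC, 0.9 pJ per add); the timestep count and firing rate below are illustrative placeholders, not the paper's measured values.

```python
# Commonly cited 45 nm CMOS energy costs per operation; the paper's
# exact accounting and measured firing rates may differ.
E_MAC = 4.6e-12  # joules per 32-bit multiply-accumulate (ANN)
E_AC = 0.9e-12   # joules per 32-bit accumulate (SNN, addition only)

def ann_energy(n_ops):
    """Every ANN operation is a full multiply-accumulate."""
    return n_ops * E_MAC

def snn_energy(n_ops, timesteps, firing_rate):
    """Event-driven: only emitted spikes trigger accumulates, so cost
    scales with timesteps and the average spike firing rate."""
    return n_ops * timesteps * firing_rate * E_AC

# Illustrative numbers: 4 timesteps at a 20% average firing rate.
ratio = snn_energy(1.0, timesteps=4, firing_rate=0.2) / ann_energy(1.0)
```

Under these placeholder settings the SNN costs about 16% of the ANN's energy per layer, the same order as the paper's reported 14.7%.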