Spiking Vocos: An Energy-Efficient Neural Vocoder

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive energy consumption of neural vocoders on edge devices, this paper proposes an ultra-low-power spiking neural vocoder. Methodologically: (1) a Spiking ConvNeXt module with amplitude shortcut paths is designed to alleviate information bottlenecks in spiking neural networks (SNNs); (2) a self-architectural distillation strategy enables efficient knowledge transfer from artificial neural networks (ANNs) to SNNs; and (3) a lightweight time-shift module enhances temporal modeling, while event-driven computation and convolution acceleration reduce multiply-accumulate operations. Experiments show that the proposed vocoder consumes only 14.7% of the energy required by its ANN counterpart, achieving competitive speech quality—UTMOS 3.74 and PESQ 3.45—on par with the ANN baseline. To our knowledge, this is the first SNN-based vocoder architecture that achieves high-fidelity speech synthesis while delivering substantial energy efficiency for edge deployment.
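The self-architectural distillation mentioned above trains the SNN against an ANN teacher that shares the same architecture. As a rough illustration, one common form combines a task loss with an MSE feature-matching term on intermediate activations; the function name, loss form, and weighting below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def distill_loss(student_feats, teacher_feats, targets, student_out, lam=1.0):
    """Hypothetical sketch of a self-architectural distillation objective:
    a task loss on the SNN student's output plus an MSE term that pulls the
    student's hidden activations toward those of an ANN teacher with the
    same architecture. `lam` balances the two terms (assumed, not stated
    in the paper)."""
    # Task loss: reconstruction error on the student's own output.
    task = np.mean((student_out - targets) ** 2)
    # Feature-matching loss: average MSE over paired intermediate layers.
    feat = np.mean([np.mean((s - t) ** 2)
                    for s, t in zip(student_feats, teacher_feats)])
    return task + lam * feat
```

Because teacher and student share one architecture, features can be matched layer by layer without extra projection heads, which keeps the transfer lightweight.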

📝 Abstract
Despite the remarkable progress in the synthesis speed and fidelity of neural vocoders, their high energy consumption remains a critical barrier to practical deployment on computationally restricted edge devices. Spiking Neural Networks (SNNs), widely recognized for their high energy efficiency due to their event-driven nature, offer a promising solution for low-resource scenarios. In this paper, we propose Spiking Vocos, a novel spiking neural vocoder with ultra-low energy consumption, built upon the efficient Vocos framework. To mitigate the inherent information bottleneck in SNNs, we design a Spiking ConvNeXt module to reduce Multiply-Accumulate (MAC) operations and incorporate an amplitude shortcut path to preserve crucial signal dynamics. Furthermore, to bridge the performance gap with its Artificial Neural Network (ANN) counterpart, we introduce a self-architectural distillation strategy to effectively transfer knowledge. A lightweight Temporal Shift Module is also integrated to enhance the model's ability to fuse information across the temporal dimension with negligible computational overhead. Experiments demonstrate that our model achieves performance comparable to its ANN counterpart, with UTMOS and PESQ scores of 3.74 and 3.45 respectively, while consuming only 14.7% of the energy. The source code is available at https://github.com/pymaster17/Spiking-Vocos.
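The Temporal Shift Module named in the abstract fuses information across time essentially for free: instead of adding temporal convolutions, it shifts a fraction of channels one step into the past and another fraction one step into the future. The sketch below follows the generic TSM idea (Lin et al.); the exact fractions and padding used in Spiking Vocos are assumptions.

```python
import numpy as np

def temporal_shift(x: np.ndarray, shift_frac: float = 0.25) -> np.ndarray:
    """Shift a fraction of channels along the time axis (zero-padded).

    x: array of shape (channels, time). The first `n` channels are shifted
    one frame toward the future (carrying past information forward), the
    next `n` one frame toward the past, and the rest are left untouched.
    The 1/4 split is an assumed default, not a value from the paper.
    """
    c, t = x.shape
    n = int(c * shift_frac)           # channels per shift direction
    out = np.zeros_like(x)
    out[:n, 1:] = x[:n, :-1]          # shift right: inject past frames
    out[n:2 * n, :-1] = x[n:2 * n, 1:]  # shift left: inject future frames
    out[2 * n:] = x[2 * n:]           # remaining channels unchanged
    return out
```

The shift itself involves no multiplications, which is why the abstract can claim temporal fusion "with negligible computational overhead".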
Problem

Research questions and friction points this paper is trying to address.

Reducing energy consumption of neural vocoders for edge devices
Overcoming information bottleneck in Spiking Neural Networks
Bridging performance gap between SNNs and ANNs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spiking ConvNeXt module reduces MAC operations
Amplitude shortcut path preserves signal dynamics
Self-architectural distillation transfers ANN knowledge
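To make the innovations above concrete: spiking layers quantize activations to binary spikes, which discards amplitude information; an amplitude shortcut path reinjects the pre-spike signal around the neuron. The leaky integrate-and-fire dynamics below are standard, but the block structure, the scalar "convolution", and the mixing weight `alpha` are hypothetical simplifications, not the paper's actual module.

```python
import numpy as np

def lif_forward(current: np.ndarray, tau: float = 2.0, v_th: float = 1.0):
    """Leaky integrate-and-fire neuron over a 1-D input current.
    Emits a binary spike and hard-resets whenever the membrane potential
    crosses the threshold."""
    v = 0.0
    spikes = np.zeros_like(current)
    for t, i_t in enumerate(current):
        v = v + (i_t - v) / tau   # leaky integration toward the input
        if v >= v_th:
            spikes[t] = 1.0
            v = 0.0               # hard reset after a spike
    return spikes

def spiking_block(x: np.ndarray, w: float = 1.0, alpha: float = 0.5):
    """Hypothetical spiking block with an amplitude shortcut: the binary
    spike train is summed with a scaled copy of the continuous pre-spike
    activation, so amplitude information bypasses the 1-bit bottleneck."""
    pre = w * x                   # stand-in for a convolution
    spikes = lif_forward(pre)
    return spikes + alpha * pre   # shortcut preserves signal dynamics
```

The shortcut is what lets downstream layers see more than 1-bit activations while the spike path keeps computation event-driven.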
👥 Authors
Yukun Chen (Pieces Technologies Inc.)
Zhaoxi Mu (Xi’an Jiaotong University, Xi’an, China)
Andong Li (Institute of Acoustics, Chinese Academy of Sciences, Beijing, China)
Peilin Li (National University of Singapore)
Xinyu Yang (Xi’an Jiaotong University, Xi’an, China)