AI Summary
This work proposes a novel neural vocoder based on time-frequency domain range-nullspace decomposition, addressing key limitations of existing approaches, such as opaque modeling, high retraining costs, and the difficulty of balancing parameter count against performance. For the first time, range-nullspace decomposition theory is introduced into vocoder design, enabling a dual-path hierarchical encoder-decoder architecture that jointly models sub-bands and temporal dynamics. The framework further incorporates multi-condition data augmentation to support scalable inference. Despite its lightweight structure, the proposed method achieves state-of-the-art performance across multiple benchmarks, consistently outperforming current methods in both subjective and objective evaluations. This advancement significantly enhances model transparency, flexibility, and generalization capability.
Abstract
Although deep neural networks have driven significant progress in neural vocoders in recent years, they usually suffer from intrinsic challenges such as opaque modeling, inflexible retraining under different input configurations, and a parameter-performance trade-off. These inherent hurdles can heavily impede the development of this field. To resolve these problems, in this paper, we propose a novel neural vocoder in the time-frequency (T-F) domain. Specifically, we establish a connection between the classical range-null decomposition (RND) theory and the vocoder task, where the reconstruction of the target spectrogram is formulated as the superposition of a range-space component and a null-space component. The former projects the representation in the original mel domain into the target linear-scale domain, and the latter can be instantiated via neural networks to further infill the spectral details. To fully leverage the spectrum prior, an elaborate dual-path framework is devised, in which the spectrum is hierarchically encoded and decoded, and cross- and narrow-band modules are leveraged for effective modeling along the sub-band and time dimensions. To enable inference under various configurations, we propose a simple yet effective strategy that transforms multi-condition adaptation at the inference stage into data augmentation at the training stage. Comprehensive experiments are conducted on various benchmarks. Quantitative and qualitative results show that while enjoying a lightweight network structure and a scalable inference paradigm, the proposed framework achieves state-of-the-art performance among existing advanced methods. Code is available at https://github.com/Andong-Li-speech/RNDVoC.
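To make the range-null decomposition idea concrete, here is a minimal numpy sketch under simplifying assumptions: the mel transform is treated as a single full-row-rank matrix `A`, the "network output" is replaced by a random vector `z`, and a single spectral frame is used. The key property of the RND formulation is that the range-space term alone guarantees consistency with the mel observation, while the null-space term is free for the network to fill in the unobserved spectral details. All variable names here are illustrative, not from the paper's code.

```python
import numpy as np

# Toy dimensions (hypothetical): n_freq linear-scale bins, n_mels mel bands.
n_freq, n_mels = 64, 16
rng = np.random.default_rng(0)

# Stand-in for the mel filterbank: a random full-row-rank matrix A (n_mels x n_freq).
A = rng.standard_normal((n_mels, n_freq))
A_pinv = np.linalg.pinv(A)              # Moore-Penrose pseudo-inverse

x_true = rng.standard_normal(n_freq)    # target linear-scale spectrum (one frame)
y = A @ x_true                          # observed mel-scale spectrum, y = A x

# z stands in for the neural network's prediction of the missing details.
z = rng.standard_normal(n_freq)

# Range-null decomposition of the reconstruction:
#   range-space term  A_pinv @ y          -> projects mel back to the linear scale,
#   null-space term   (I - A_pinv A) @ z  -> lives in the null space of A.
x_hat = A_pinv @ y + (np.eye(n_freq) - A_pinv @ A) @ z

# Data consistency holds for ANY choice of z, since A (I - A_pinv A) = 0
# when A has full row rank: the network can only change what A cannot observe.
print(np.allclose(A @ x_hat, y))        # True
```

The design consequence is the one the abstract highlights: the observation-consistent part of the output is fixed by linear algebra (transparent modeling), and learning is confined to the null space, where the spectral details that the mel projection discards must be hallucinated.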