Robust Training of Singing Voice Synthesis Using Prior and Posterior Uncertainty

๐Ÿ“… 2025-12-16
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Publicly available singing voice datasets are scarce, leading to significant performance degradation of singing voice synthesis (SVS) models in long-tail scenarios such as skewed pitch distributions and rare vocal techniques. To address this, we propose a robust training framework grounded in prior and posterior uncertainty estimation. Specifically, we introduce the first differentiable, sample-level adversarial augmentation to enhance prior uncertainty modeling, and we design a frame-level posterior uncertainty prediction module that dynamically focuses learning on low-confidence phonetic segments. Our approach unifies differentiable data augmentation, adversarial training, and uncertainty-aware end-to-end SVS modeling. Evaluations on the bilingual (Chinese and Japanese) Opencpop and Ofuton-P datasets demonstrate substantial improvements under pitch imbalance and rare singing styles: mean opinion score (MOS) increases by 0.32 and voicing recall rate (VRR) by 4.7%, validating that uncertainty-aware training effectively enhances long-tail generalization.
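As a toy illustration of the sample-wise adversarial augmentation idea, the sketch below uses a scalar linear "model" standing in for the SVS network, per-sample augmentation gains, and FGSM-style sign ascent with clipping. All names and the specific perturbation (a gain) are assumptions for illustration, not the paper's actual implementation; the point is that the augmentation parameters are differentiable and are updated to make training harder before each model update.

```python
import numpy as np

def adversarial_augment_step(w, x, y, gain, inner_lr=0.05, outer_lr=0.02,
                             gain_min=0.5, gain_max=2.0):
    """One training step of a toy adversarial-augmentation loop.

    Model: y_hat = w * (gain * x); loss = mean squared error.
    (1) Inner maximization: ascend each per-sample gain on the loss,
        so the augmented sample becomes harder for the current model.
    (2) Outer minimization: descend the model weight on the augmented batch.
    """
    # Inner step: per-sample gradient of the loss w.r.t. each gain.
    residual = w * gain * x - y
    grad_gain = 2.0 * residual * w * x
    # Sign ascent with clipping keeps the augmentation within a valid range.
    gain = np.clip(gain + inner_lr * np.sign(grad_gain), gain_min, gain_max)

    # Outer step: train the model on the (now harder) augmented samples.
    residual = w * gain * x - y
    grad_w = float(np.mean(2.0 * residual * gain * x))
    w = w - outer_lr * grad_w
    return w, gain

# Usage: the gains drift away from identity while the model chases them.
rng = np.random.default_rng(0)
x = rng.normal(size=8)
y = 2.0 * x
w = 0.5
gain = np.ones(8)
for _ in range(200):
    w, gain = adversarial_augment_step(w, x, y, gain)
```

In a real SVS system, the augmentation would act on waveforms or acoustic features (e.g., pitch or loudness perturbations) and the inner ascent would use autograd through the synthesis loss; the bounded min-max structure is the same.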

๐Ÿ“ Abstract
Singing voice synthesis (SVS) has seen remarkable advancements in recent years. However, compared to speech and general audio data, publicly available singing datasets remain limited. In practice, this data scarcity often leads to performance degradation in long-tail scenarios, such as imbalanced pitch distributions or rare singing styles. To mitigate these challenges, we propose uncertainty-based optimization to improve the training process of end-to-end SVS models. First, we introduce differentiable data augmentation into adversarial training, which operates in a sample-wise manner to increase the prior uncertainty. Second, we incorporate a frame-level uncertainty prediction module that estimates the posterior uncertainty, enabling the model to allocate more learning capacity to low-confidence segments. Empirical results on the Opencpop (Chinese) and Ofuton-P (Japanese) datasets demonstrate that our approach improves performance from various perspectives.
Problem

Research questions and friction points this paper is trying to address.

Addresses data scarcity in singing voice synthesis datasets
Mitigates performance degradation in long-tail scenarios
Improves training robustness using uncertainty-based optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable data augmentation increases prior uncertainty
Frame-level uncertainty prediction estimates posterior uncertainty
Uncertainty-based optimization improves training robustness
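One common way to realize frame-level posterior uncertainty in a loss function is a heteroscedastic (Kendall & Gal-style) objective, sketched below; the paper's exact formulation may differ. A prediction head emits a log-variance per frame, and the loss trades per-frame reconstruction error against a confidence penalty, so the optimal predicted variance tracks the actual frame error.

```python
import numpy as np

def uncertainty_weighted_loss(pred, target, log_var):
    """Heteroscedastic loss over frames (illustrative formulation).

    pred, target, log_var: arrays of shape (frames,) or (frames, mel_bins).
    Per frame: 0.5 * exp(-s) * err^2 + 0.5 * s, where s is the predicted
    log-variance. High-uncertainty frames downweight their squared error but
    pay a log-variance penalty, so s cannot grow without bound; the optimum
    for a fixed error is s = log(err^2).
    """
    err2 = (pred - target) ** 2
    per_frame = 0.5 * np.exp(-log_var) * err2 + 0.5 * log_var
    return float(per_frame.mean())
```

Because the per-frame optimum satisfies `log_var = log(err^2)`, the predicted uncertainty becomes a calibrated signal of which phonetic segments the model handles poorly, which is what lets training attention be steered toward low-confidence frames.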
๐Ÿ”Ž Similar Papers
No similar papers found.