AI Summary
Publicly available singing voice datasets are scarce, which leads to significant performance degradation of singing voice synthesis (SVS) models in long-tail scenarios such as skewed pitch distributions and rare vocal techniques. To address this, we propose a robust training framework grounded in prior and posterior uncertainty estimation. Specifically, we introduce the first differentiable, sample-level adversarial augmentation to enhance prior uncertainty modeling, and we design a frame-level posterior uncertainty prediction module that dynamically focuses learning on low-confidence phonetic segments. Our approach unifies differentiable data augmentation, adversarial training, and uncertainty-aware end-to-end SVS modeling. Evaluations on the bilingual (Chinese and Japanese) Opencpop and Ofuton-P datasets demonstrate substantial improvements under pitch imbalance and rare singing styles: mean opinion score (MOS) increases by 0.32 and voicing recall rate (VRR) by 4.7%, validating that uncertainty-aware training effectively enhances long-tail generalization.
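The summary above does not spell out how "differentiable, sample-level adversarial augmentation" is implemented, so the following is only a toy sketch of the general idea, with made-up names and a scalar model rather than the paper's end-to-end SVS pipeline: each training sample gets its own learnable augmentation parameter, updated by gradient *ascent* on the task loss (making that sample harder), while the model weights are updated by gradient *descent* through the same differentiable augmentation.

```python
# Hypothetical sketch (not the paper's actual method): sample-wise
# differentiable adversarial augmentation on a scalar linear model.
# Model: y_hat = w * x_aug, loss = (y_hat - y)^2
# Augmentation: x_aug = x + delta_i, one learnable delta per sample.
def train(xs, ys, steps=200, lr_w=0.02, lr_d=0.02, delta_max=0.5):
    w = 0.0
    deltas = [0.0 for _ in xs]          # sample-wise augmentation params
    for _ in range(steps):
        for i, (x, y) in enumerate(zip(xs, ys)):
            x_aug = x + deltas[i]       # differentiable augmentation
            err = w * x_aug - y
            # Analytic gradients: dL/dw = 2*err*x_aug, dL/ddelta = 2*err*w
            w -= lr_w * 2 * err * x_aug             # descent: fit the data
            d = deltas[i] + lr_d * 2 * err * w      # ascent: harder sample
            deltas[i] = max(-delta_max, min(delta_max, d))  # keep bounded
    return w, deltas
```

Because the augmentation is differentiable and bounded, the adversary can only perturb each sample within `delta_max`, so the model is pushed toward parameters that stay accurate under worst-case (high prior uncertainty) inputs rather than memorizing the clean training points.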
Abstract
Singing voice synthesis (SVS) has seen remarkable advancements in recent years. However, compared to speech and general audio data, publicly available singing datasets remain limited. In practice, this data scarcity often leads to performance degradation in long-tail scenarios, such as imbalanced pitch distributions or rare singing styles. To mitigate these challenges, we propose uncertainty-based optimization to improve the training process of end-to-end SVS models. First, we introduce differentiable data augmentation into the adversarial training, which operates in a sample-wise manner to increase the prior uncertainty. Second, we incorporate a frame-level uncertainty prediction module that estimates the posterior uncertainty, enabling the model to allocate more learning capacity to low-confidence segments. Empirical results on the Opencpop (Chinese) and Ofuton-P (Japanese) datasets demonstrate that our approach improves performance from multiple perspectives.
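The abstract does not define the frame-level uncertainty objective, so as an illustration only, here is one common way such a module is trained: a heteroscedastic loss in the style of Kendall and Gal, where the model predicts a per-frame log-variance alongside its output. The function name and NumPy formulation below are ours, not the paper's.

```python
import numpy as np

def uncertainty_weighted_loss(pred, target, log_var):
    """Heteroscedastic per-frame regression loss (illustrative sketch).

    Frames with high predicted uncertainty (large log_var) have their
    squared error down-weighted by exp(-log_var), while the additive
    log_var term penalizes the model for declaring every frame
    uncertain.  Gradient flow thus concentrates on frames where the
    model is confident but wrong, i.e. genuinely low-confidence
    segments get extra learning capacity.
    """
    se = (pred - target) ** 2                     # per-frame squared error
    return float(np.mean(0.5 * (np.exp(-log_var) * se + log_var)))
```

With `log_var = 0` everywhere this reduces to half the ordinary MSE; raising `log_var` on a high-error frame lowers that frame's contribution, which is the mechanism an uncertainty prediction module can exploit.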