Bayesian Speech Synthesizers Can Learn from Multiple Teachers

πŸ“… 2025-10-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing codec-based TTS models suffer from insufficient robustness of pretrained speech encoders and quantization artifacts in voice cloning, while continuous-valued autoregressive TTS approaches face challenges including weak speech pattern modeling and unreliable sampling strategies. To address these issues, we propose BELLE, the first framework to integrate Bayesian evidential learning into continuous-valued autoregressive TTS, directly generating mel-spectrogram frames with calibrated uncertainty estimates. Methodologically, BELLE models each mel-spectrogram frame as a Gaussian distribution drawn from a learned hyper-distribution, synthesizes diverse speech samples using multiple teacher TTS models, and enhances generalization through knowledge distillation. Experiments demonstrate that BELLE achieves performance on par with state-of-the-art open-source TTS models using only one-tenth of their training data. It maintains high-quality, stable speech synthesis under low-resource conditions, significantly reducing data dependency while preserving fidelity and robustness.

πŸ“ Abstract
Codec-based text-to-speech (TTS) models have recently gained traction for their efficiency and strong performance in voice cloning. However, codec-based TTS faces limitations due to the challenges of pretraining robust speech codecs and the quality degradation introduced by quantization errors. Emerging evidence suggests that continuous-valued generative models can alleviate these issues and serve as a promising alternative. Yet, effectively modelling diverse speech patterns and developing reliable sampling strategies for continuous-valued autoregressive (AR) TTS remain underexplored. In this work, we propose BELLE, Bayesian evidential learning with language modelling for TTS, a novel continuous-valued AR framework that directly predicts mel-spectrograms from textual input. BELLE treats each mel-spectrogram frame as a Gaussian distribution sampled from a learned hyper-distribution, enabling principled uncertainty estimation, particularly in scenarios with parallel data (i.e., one text-audio prompt paired with multiple speech samples). To obtain such data, diverse speech samples are synthesized using multiple pre-trained TTS models given the same text-audio prompts, which are distilled into BELLE via Bayesian evidential learning. Experimental results indicate that BELLE demonstrates highly competitive performance compared with the current best open-source TTS models, even though BELLE is trained on a large amount of synthetic data and uses only approximately one-tenth of their training data. Audio samples generated by BELLE are available at https://belletts.github.io/Belle/. The code, checkpoints, and synthetic data will be released after the paper is accepted.
Problem

Research questions and friction points this paper is trying to address.

Addresses limitations of codec-based TTS with quantization errors
Models diverse speech patterns using continuous-valued autoregressive framework
Enables uncertainty estimation when learning from multiple speech samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian evidential learning framework for TTS
Predicts mel-spectrograms as Gaussian distributions
Distills knowledge from multiple pre-trained TTS models
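To make the "predicts mel-spectrograms as Gaussian distributions" idea concrete, the sketch below shows one standard way such evidential uncertainty can be formulated per mel bin: the deep evidential regression loss of Amini et al. (2020), where the network outputs Normal-Inverse-Gamma parameters instead of a point estimate. This is an illustrative assumption, not BELLE's actual loss or architecture; the function names and toy parameter values are hypothetical.

```python
# Sketch: Normal-Inverse-Gamma (NIG) evidential regression per mel bin,
# one plausible instantiation of the Bayesian evidential learning the
# paper describes. Not the paper's actual implementation.
import math

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log-likelihood of target y under the Student-t predictive
    implied by NIG parameters (gamma=mean, nu, alpha, beta)."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * math.log(math.pi / nu)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))

def uncertainties(nu, alpha, beta):
    """Aleatoric (data noise) and epistemic (evidence) variance
    implied by the NIG parameters; requires alpha > 1."""
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic

# Toy check: the loss grows as the target moves away from the predicted mean,
# and higher evidence (nu) shrinks the epistemic uncertainty.
near = nig_nll(0.1, gamma=0.0, nu=1.0, alpha=2.0, beta=1.0)
far = nig_nll(3.0, gamma=0.0, nu=1.0, alpha=2.0, beta=1.0)
assert far > near
assert uncertainties(4.0, 2.0, 1.0)[1] < uncertainties(1.0, 2.0, 1.0)[1]
```

With parallel data from multiple teachers, one text-audio prompt yields several target frames per position, so a per-frame loss of this kind can be averaged over the teacher samples, letting the model attribute teacher disagreement to uncertainty rather than forcing a single point prediction.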
πŸ”Ž Similar Papers
No similar papers found.
Ziyang Zhang
Tsinghua University, Shanghai Artificial Intelligence Laboratory
Yifan Gao
Tsinghua University, Shanghai Artificial Intelligence Laboratory
Xuenan Xu
Shanghai Jiao Tong University
Baoxiangli
Tsinghua University, Shanghai Artificial Intelligence Laboratory
Wen Wu
Tsinghua University, Shanghai Artificial Intelligence Laboratory
Chao Zhang
Tsinghua University, Shanghai Artificial Intelligence Laboratory

Tags: audio generation, audio understanding, speech synthesis