🤖 AI Summary
Existing codec-based TTS models suffer from limited robustness in their pretrained speech codecs and from quantization artifacts in voice cloning, while continuous-valued autoregressive TTS approaches face weak speech-pattern modelling and unreliable sampling strategies. To address these issues, we propose BELLE, the first framework to integrate Bayesian evidential learning into continuous-valued autoregressive TTS, directly generating mel-spectrogram frames with calibrated uncertainty estimates. Methodologically, BELLE models output uncertainty via Gaussian distributions, synthesizes diverse speech samples using multiple teacher TTS models, and improves generalization through knowledge distillation. Experiments show that BELLE matches state-of-the-art open-source TTS models while using only about one-tenth of their training data, and that it maintains high-quality, stable speech synthesis under low-resource conditions, substantially reducing data dependency while preserving fidelity and robustness.
📄 Abstract
Codec-based text-to-speech (TTS) models have recently gained traction for their efficiency and strong performance in voice cloning. However, codec-based TTS faces limitations due to the challenges of pretraining robust speech codecs and the quality degradation introduced by quantization errors. Emerging evidence suggests that continuous-valued generative models can alleviate these issues and serve as a promising alternative. Yet, effectively modelling diverse speech patterns and developing reliable sampling strategies for continuous-valued autoregressive (AR) TTS remain underexplored. In this work, we propose BELLE, Bayesian evidential learning with language modelling for TTS, a novel continuous-valued AR framework that directly predicts mel-spectrograms from textual input. BELLE treats each mel-spectrogram frame as a Gaussian distribution sampled from a learned hyper-distribution, enabling principled uncertainty estimation, particularly in scenarios with parallel data (i.e., one text-audio prompt paired with multiple speech samples). To obtain such data, diverse speech samples are synthesized by multiple pre-trained TTS models given the same text-audio prompts and then distilled into BELLE via Bayesian evidential learning. Experimental results indicate that BELLE is highly competitive with the current best open-source TTS models, even though it is trained on a large amount of synthetic data and uses only approximately one-tenth of their training data. Audio samples generated by BELLE are available at https://belletts.github.io/Belle/. The code, checkpoints, and synthetic data will be released after the paper is accepted.
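To make the "Gaussian frame sampled from a learned hyper-distribution" idea concrete, here is a minimal sketch in the style of deep evidential regression. Everything below is an illustrative assumption, not BELLE's released implementation: the Normal-Inverse-Gamma (NIG) parameterization, the softplus transforms, and the 80-bin mel frame are all placeholders for whatever the paper actually uses.

```python
import numpy as np

def nig_params(raw):
    """Map raw network outputs of shape (..., 4) to NIG hyper-distribution
    parameters: gamma (predicted mean), nu > 0, alpha > 1, beta > 0.
    Softplus (log(1 + e^x)) enforces the positivity constraints."""
    gamma = raw[..., 0]
    nu = np.logaddexp(0.0, raw[..., 1])           # nu > 0
    alpha = 1.0 + np.logaddexp(0.0, raw[..., 2])  # alpha > 1 keeps variance finite
    beta = np.logaddexp(0.0, raw[..., 3])         # beta > 0
    return gamma, nu, alpha, beta

def predict_frame(raw):
    """Predictive mean plus an uncertainty decomposition for one mel frame.
    Under the NIG prior, each mel bin's Gaussian has a distribution over its
    own mean and variance, which yields calibrated uncertainty estimates."""
    gamma, nu, alpha, beta = nig_params(raw)
    aleatoric = beta / (alpha - 1.0)              # E[sigma^2]: data noise
    epistemic = beta / (nu * (alpha - 1.0))       # Var[mu]: model uncertainty
    return gamma, aleatoric, epistemic

# Example: one hypothetical 80-bin mel frame from dummy "network" outputs.
rng = np.random.default_rng(0)
raw = rng.normal(size=(80, 4))
mean, aleatoric, epistemic = predict_frame(raw)
```

In this setup, distilling multiple teacher samples for the same text-audio prompt gives several targets per frame, so the NIG head can be fit to the observed spread across teachers rather than to a single point estimate.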