🤖 AI Summary
To address the trade-off between naturalness and computational efficiency in text-to-speech synthesis, this paper proposes a continuous autoregressive speech synthesis framework. Methodologically, it abandons discrete quantization (e.g., RVQ) and instead employs a variational autoencoder to learn continuous latent representations; models the conditional distribution using a Gaussian mixture model; and introduces a differentiable, stochastic monotonic alignment mechanism to ensure strict temporal alignment. Key contributions include: (i) the first continuous-latent-space autoregressive paradigm for speech synthesis; (ii) a parameter count of only 10.3% that of VALL-E; and (iii) improved training stability and inference efficiency. Experiments demonstrate consistent superiority over VALL-E in both subjective MOS scores and multiple objective metrics (e.g., MCD, F0 RMSE, and duration error). Audio samples and code are publicly released for reproducibility.
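The core modeling idea above is to replace a discrete-token softmax with a Gaussian mixture likelihood over each continuous latent frame. A minimal sketch of that likelihood, assuming a diagonal-covariance parameterization with `K` components over a `D`-dimensional latent (the paper's exact parameterization and dimensions are not specified here, so these names are illustrative):

```python
import numpy as np

def gmm_nll(target, log_w, mu, log_std):
    """Negative log-likelihood of one continuous latent frame under a
    diagonal-covariance Gaussian mixture (illustrative sketch only).

    target:  (D,)   latent frame produced by the VAE encoder
    log_w:   (K,)   log mixture weights (assumed pre-normalized)
    mu:      (K, D) component means predicted by the autoregressive model
    log_std: (K, D) component log standard deviations
    """
    # Standardized residuals per component.
    z = (target[None, :] - mu) / np.exp(log_std)
    # Per-component diagonal Gaussian log-likelihood, summed over dims.
    comp_ll = -0.5 * np.sum(z**2 + 2.0 * log_std + np.log(2.0 * np.pi), axis=1)
    # Log-sum-exp over components for numerical stability.
    a = log_w + comp_ll
    m = a.max()
    return -(m + np.log(np.exp(a - m).sum()))
```

Minimizing this NLL trains the autoregressive model end to end without any codebook or quantizer, which is what simplifies the pipeline relative to RVQ-based systems.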
📝 Abstract
We propose a novel autoregressive modeling approach for speech synthesis, combining a variational autoencoder (VAE) with a multi-modal latent space and an autoregressive model that uses Gaussian Mixture Models (GMM) as the conditional probability distribution. Unlike previous methods that rely on residual vector quantization, our model leverages continuous speech representations from the VAE's latent space, greatly simplifying the training and inference pipelines. We also introduce a stochastic monotonic alignment mechanism to enforce strict monotonic alignments. Our approach significantly outperforms the state-of-the-art autoregressive model VALL-E in both subjective and objective evaluations, achieving these results with only 10.3% of VALL-E's parameters. This demonstrates the potential of continuous speech language models as a more efficient alternative to existing quantization-based speech language models. Sample audio can be found at https://tinyurl.com/gmm-lm-tts.