🤖 AI Summary
In expressive speech synthesis, explicit parametric prosody modeling offers interpretability and controllability but fails to capture the natural variability of human speech. This paper proposes a stochastic generative paradigm for prosody synthesis, introducing three probabilistic modeling approaches (normalizing flows, conditional flow matching, and rectified flows) to explicitly learn the joint distribution of pitch, energy, and duration. Fine-grained prosody control is achieved via temperature scaling. Experiments show that the method reaches a MOS of 4.21, matching natural speech in subjective naturalness and significantly outperforming deterministic baselines; objective metrics, including F0 dynamic range and prosodic diversity score, improve by 15–32%. The framework further enables multi-granularity, disentangled prosody editing, establishing a scalable probabilistic paradigm for highly expressive and controllable text-to-speech synthesis.
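To make the conditional-flow-matching variant concrete, here is a minimal NumPy sketch of the standard CFM training objective applied to prosody features. The function names, the `(batch, frames, 3)` layout for pitch/energy/duration, and the `model` callable are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def cfm_loss(model, x1, cond, rng, sigma_min=1e-4):
    """Conditional flow matching loss along an optimal-transport path.

    x1:   target prosody features, shape (batch, frames, 3)
          for pitch, energy, and duration (illustrative layout).
    cond: conditioning input (e.g. text encoding); passed through untouched.
    model(xt, t, cond) should predict the vector field at time t.
    """
    # Sample a random time in [0, 1] per batch element.
    t = rng.random((x1.shape[0], 1, 1))
    # Draw the noise endpoint of the probability path.
    x0 = rng.standard_normal(x1.shape)
    # Linear interpolant between noise and data (OT path).
    xt = (1 - (1 - sigma_min) * t) * x0 + t * x1
    # Conditional target vector field for this path.
    target = x1 - (1 - sigma_min) * x0
    v = model(xt, t, cond)
    # Regress the predicted field onto the target field.
    return float(np.mean((v - target) ** 2))
```

In practice `model` would be a neural network trained by gradient descent on this loss; sampling then integrates the learned vector field from noise to a prosody trajectory.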
📝 Abstract
While generative methods have progressed rapidly in recent years, generating expressive prosody for an utterance remains a challenging task in text-to-speech synthesis. This is particularly true for systems that model prosody explicitly through parameters such as pitch, energy, and duration, which is commonly done for the sake of interpretability and controllability. In this work, we investigate the effectiveness of stochastic methods for this task, including Normalizing Flows, Conditional Flow Matching, and Rectified Flows. We compare these methods to a traditional deterministic baseline, as well as to real human realizations. Our extensive subjective and objective evaluations demonstrate that stochastic methods produce natural prosody on par with human speakers by capturing the variability inherent in human speech. Further, they open up additional controllability options by allowing the sampling temperature to be tuned.
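The temperature control mentioned above amounts to scaling the standard deviation of the latent prior before mapping it through the learned generative model. A minimal sketch, assuming a trained inverse flow is available as a callable (`flow_inverse` and `cond` are placeholders, not names from the paper):

```python
import numpy as np

def sample_prosody(flow_inverse, cond, dim, temperature=1.0, rng=None):
    """Sample prosody by drawing z ~ N(0, temperature^2 I) and mapping it
    through the trained inverse flow.

    temperature < 1 concentrates samples near average prosody;
    temperature > 1 yields more varied, expressive realizations.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Scale the latent noise by the sampling temperature.
    z = temperature * rng.standard_normal(dim)
    # Map latent code to prosody parameters (pitch, energy, duration).
    return flow_inverse(z, cond)
```

At `temperature=0` the sampler becomes deterministic, collapsing to the model's average prediction, which is one way the stochastic framework subsumes a deterministic baseline.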