🤖 AI Summary
This work addresses two gaps in prosody modeling: the insufficient fusion of acoustic and textual information, and the poor generalizability of learned prosody representations to downstream TTS tasks. We propose a standalone, end-to-end prosody prediction framework. Methodologically, we design a masked joint encoder that aligns and fuses partially masked acoustic features (e.g., mel-spectrograms) with text sequences to learn fixed-dimensional latent prosodic representations; an encoder-decoder architecture then performs multi-granularity, frame-level prediction of F0 and energy contours. Evaluated on the GigaSpeech dataset, our approach outperforms baselines, including style-encoding methods, in prosody prediction accuracy. When integrated into a TTS system, it improves the naturalness and subjective quality of synthetic speech, yielding a MOS gain of over 0.3. The framework supports disentangled, transferable prosody modeling, enabling robust cross-task prosodic representation learning.
📝 Abstract
Prosody conveys rich emotional and semantic information of the speech signal, as well as individual idiosyncrasies. We propose a stand-alone model that maps text to prosodic features such as F0 and energy and can be used in downstream tasks such as TTS. The ProMode encoder takes as input acoustic features and time-aligned textual content, both partially masked, and produces a fixed-length latent prosodic embedding. The decoder predicts acoustics in the masked region using both the encoded prosody and the unmasked textual content. Trained on the GigaSpeech dataset, we compare our method with state-of-the-art style encoders. For F0 and energy prediction, our model shows consistent improvements at different levels of granularity. We also integrate the predicted prosodic features into a TTS system and conduct perceptual tests, which show higher prosody preference than the baselines, demonstrating the model's potential for tasks where prosody modeling is important.
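To make the masked-prediction setup concrete, here is a minimal NumPy sketch of the data flow described above: a contiguous span of time-aligned acoustic and text features is masked, a joint encoder maps the visible input to a fixed-length prosody embedding, and a decoder predicts frame-level F0 and energy for the masked region. All shapes, names, and the trivial encode/decode bodies are illustrative stand-ins assumed for this sketch, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T frames, 80-dim mel features, frame-level phoneme IDs.
T, MEL_DIM, EMB_DIM = 200, 80, 16
mel = rng.standard_normal((T, MEL_DIM))      # acoustic features
phonemes = rng.integers(0, 50, size=T)       # time-aligned textual content

# Partially mask a contiguous span of both streams (assumed masking scheme).
mask = np.zeros(T, dtype=bool)
mask[80:140] = True                          # 60 masked frames
mel_in = np.where(mask[:, None], 0.0, mel)   # zero out masked acoustic frames
phon_in = np.where(mask, -1, phonemes)       # sentinel ID for masked text

def encode(mel_x, phon_x, mask):
    """Stand-in joint encoder: pool visible frames into a fixed-length embedding."""
    visible = mel_x[~mask]
    return visible.mean(axis=0)[:EMB_DIM]    # fixed-dimensional prosody code

def decode(prosody_emb, phon_full, mask):
    """Stand-in decoder: predict F0/energy per frame inside the masked region.

    A real decoder would condition on the unmasked text and the prosody
    embedding; here we just broadcast simple statistics of the embedding.
    """
    n = int(mask.sum())
    f0 = np.full(n, prosody_emb.mean())
    energy = np.full(n, np.abs(prosody_emb).mean())
    return f0, energy

emb = encode(mel_in, phon_in, mask)
f0_pred, energy_pred = decode(emb, phonemes, mask)
print(emb.shape, f0_pred.shape, energy_pred.shape)  # (16,) (60,) (60,)
```

The fixed-length embedding is what makes the representation transferable: a downstream TTS system can consume `emb` (or the predicted F0/energy contours) without depending on the encoder's input length.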