🤖 AI Summary
This work proposes a method for synthesizing controllable Lombard-effect speech for any speaker without requiring Lombard training samples. By learning style embeddings from large-scale, prosodically diverse data and applying principal component analysis (PCA) to identify the key directions associated with Lombard characteristics, the approach enables precise manipulation of style embeddings. The method supports fine-grained control over both Lombard intensity and prosody, substantially improving speech intelligibility in noisy environments while preserving naturalness and speaker identity. The authors describe this as the first unsupervised, cross-speaker framework capable of generating controllable Lombard speech without any Lombard-labeled data.
📝 Abstract
The Lombard effect plays a key role in natural communication, particularly in noisy environments or when addressing hearing-impaired listeners. We present a controllable text-to-speech (TTS) system capable of synthesizing Lombard speech for any speaker without requiring explicit Lombard data during training. Our approach leverages style embeddings learned from a large, prosodically diverse dataset and analyzes their correlation with Lombard attributes using principal component analysis (PCA). By shifting the relevant PCA components, we manipulate the style embeddings and incorporate them into our TTS model to generate speech at desired Lombard levels. Evaluations demonstrate that our method preserves naturalness and speaker identity, enhances intelligibility under noise, and provides fine-grained control over prosody, offering a robust, speaker-independent solution for controllable Lombard TTS.
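The core manipulation described above (find a PCA direction correlated with Lombard attributes, then shift embeddings along it) can be sketched as follows. This is a minimal illustration on synthetic data: the real style embeddings come from the paper's style encoder, which is not reproduced here, and the loudness proxy, variable names, and the 2.0 shift magnitude are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for style embeddings from a pre-trained TTS style encoder:
# N utterances, each a D-dimensional embedding (synthetic here).
N, D = 500, 32
embeddings = rng.normal(size=(N, D))

# Synthetic per-utterance Lombard proxy (e.g. loudness), correlated by
# construction with one hidden direction in embedding space.
hidden_axis = rng.normal(size=D)
hidden_axis /= np.linalg.norm(hidden_axis)
loudness = embeddings @ hidden_axis + 0.1 * rng.normal(size=N)

# PCA via SVD on the mean-centered embeddings.
mean = embeddings.mean(axis=0)
centered = embeddings - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Pick the principal component whose scores correlate most with the proxy.
scores = centered @ Vt.T  # per-utterance PC scores, shape (N, D)
corrs = [abs(np.corrcoef(scores[:, k], loudness)[0, 1]) for k in range(D)]
k_star = int(np.argmax(corrs))
direction = Vt[k_star]  # unit-norm Lombard-correlated PCA direction

def shift_lombard(embedding, alpha):
    """Shift a style embedding along the Lombard-correlated PC.

    alpha is the shift in standard deviations of the PC scores, so it acts
    as a fine-grained Lombard-intensity knob.
    """
    step = S[k_star] / np.sqrt(N - 1)  # std of scores along that PC
    return embedding + alpha * step * direction

neutral = embeddings[0]
lombard_strong = shift_lombard(neutral, 2.0)  # push toward a stronger Lombard style
```

The shifted embedding would then be fed to the TTS model in place of the neutral one; sweeping `alpha` yields a continuum of Lombard intensities, which is the control mechanism the abstract describes.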