EmergentTTS-Eval: Evaluating TTS Models on Complex Prosodic, Expressiveness, and Linguistic Challenges Using Model-as-a-Judge

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing TTS benchmarks inadequately assess a model's capacity to capture fine-grained semantic and prosodic distinctions, such as emotion, paralinguistic cues, loanwords, syntactic complexity, atypical pronunciations (e.g., URLs and mathematical expressions), and interrogative intonation. To address this, we introduce the first fine-grained TTS evaluation benchmark covering these six core challenges. Our method uses a "model-as-a-judge" paradigm: LLMs automatically generate 1,645 diverse test cases, and a large audio language model (LALM) delivers multidimensional automatic scoring across emotion, prosody, intonation, and pronunciation. This approach achieves high human-machine agreement (Spearman's ρ > 0.85) and robust discriminative power across state-of-the-art systems, including 11Labs, Deepgram, and OpenAI's 4o-mini-TTS, exposing subtle performance differences. All code and data are publicly released.

📝 Abstract
Text-to-Speech (TTS) benchmarks often fail to capture how well models handle nuanced and semantically complex text. Building on EmergentTTS, we introduce EmergentTTS-Eval, a comprehensive benchmark covering six challenging TTS scenarios: emotions, paralinguistics, foreign words, syntactic complexity, complex pronunciation (e.g., URLs, formulas), and questions. Crucially, our framework automates both test-case generation and evaluation, making the benchmark easily extensible. Starting from a small set of human-written seed prompts, we iteratively extend them using LLMs to target specific structural, phonetic, and prosodic challenges, resulting in 1,645 diverse test cases. Moreover, we employ a model-as-a-judge approach, using a Large Audio Language Model (LALM) to assess the speech across multiple dimensions such as expressed emotion and prosodic, intonational, and pronunciation accuracy. We evaluate state-of-the-art open-source and proprietary TTS systems, such as 11Labs, Deepgram, and OpenAI's 4o-mini-TTS, on EmergentTTS-Eval, demonstrating its ability to reveal fine-grained performance differences. Results show that the model-as-a-judge approach offers robust TTS assessment and a high correlation with human preferences. We open-source the evaluation code (https://github.com/boson-ai/EmergentTTS-Eval-public) and the dataset (https://huggingface.co/datasets/bosonai/EmergentTTS-Eval).

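The iterative test-case expansion the abstract describes (a small pool of human-written seeds grown by an LLM into progressively harder variants) can be sketched as follows. This is a minimal sketch of the loop structure only: `mutate` is a hypothetical stand-in for the LLM rewriting call, and the rounds/branching parameters are assumptions, not values from the paper.

```python
def expand_test_cases(seeds, mutate, rounds=2, per_seed=3):
    """Iteratively grow a pool of TTS test sentences.

    `mutate(text, i)` stands in for an LLM call that rewrites `text`
    into variant `i`, targeting a harder structural, phonetic, or
    prosodic challenge (the paper's actual prompts are not shown here).
    """
    pool = list(seeds)
    frontier = list(seeds)  # only the newest variants are expanded further
    for _ in range(rounds):
        next_frontier = []
        for text in frontier:
            for i in range(per_seed):
                variant = mutate(text, i)
                if variant not in pool:  # deduplicate across the whole pool
                    pool.append(variant)
                    next_frontier.append(variant)
        frontier = next_frontier
    return pool
```

With a toy `mutate` such as `lambda t, i: f"{t} [v{i}]"`, one round with `per_seed=2` turns a single seed into a pool of three sentences; swapping in a real LLM call targeting one of the six scenarios would yield the kind of pool the benchmark builds.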
Problem

Research questions and friction points this paper is trying to address.

Evaluating TTS models on complex prosodic and expressive challenges
Automating test-case generation and evaluation for nuanced TTS performance
Assessing TTS systems using model-as-a-judge for fine-grained differences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated test-case generation using LLMs
Model-as-a-judge approach with LALM
Comprehensive benchmark for diverse TTS scenarios
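The LALM judge scores each utterance along several dimensions (emotion, prosody, intonation, pronunciation). A minimal sketch of how such a per-dimension verdict might be validated and aggregated is shown below; the dimension names follow the paper's evaluation axes, but the data shape and the simple averaging rule are assumptions for illustration.

```python
from statistics import mean

# Hypothetical shape of a LALM judge verdict: one numeric score per
# evaluation dimension, averaged into a single utterance-level score.
DIMENSIONS = ("emotion", "prosody", "intonation", "pronunciation")

def aggregate_judgment(scores: dict) -> float:
    """Check that the judge covered every dimension, then average."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"judge omitted dimensions: {missing}")
    return mean(scores[d] for d in DIMENSIONS)
```

Equal weighting keeps the sketch simple; a real harness might weight dimensions per scenario (e.g., emphasizing pronunciation for the URL/formula category).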
👥 Authors
R. Manku (Boson AI, Santa Clara, CA 95054)
Yuzhi Tang (Boson AI, Santa Clara, CA 95054)
Xingjian Shi (OpenAI)
Mu Li (Boson AI, Santa Clara, CA 95054)
Alex Smola (Boson AI)