🤖 AI Summary
Existing TTS benchmarks inadequately assess models' capacity to capture fine-grained semantic and prosodic distinctions, such as emotion, paralinguistic features, loanwords, syntactic complexity, atypical pronunciations (e.g., URLs, mathematical expressions), and interrogative intonation. To address this, the authors introduce a fine-grained TTS evaluation benchmark covering these six core challenges. The framework automates both test-case generation and evaluation: LLMs iteratively expand human-written seed prompts into 1,645 diverse test cases, and a large audio-language model (LALM) acts as a judge, delivering multidimensional automatic scoring across emotion, prosody, intonation, and pronunciation. This approach achieves high human-machine agreement (Spearman's ρ > 0.85) and robust discriminative power across state-of-the-art systems, including 11Labs, Deepgram, and OpenAI's 4o-mini-TTS, exposing subtle performance differences. All code and data are publicly released.
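The human-machine agreement figure above is a Spearman rank correlation between judge scores and human ratings. A minimal, dependency-free sketch of that check, using illustrative toy scores rather than the paper's actual data:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for tie-free score lists."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    # Classic no-ties formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Per-test-case scores: LALM judge vs. human raters (toy values, 1-5 scale)
judge_scores = [3, 1, 4, 2, 5]
human_scores = [3, 2, 4, 1, 5]
print(spearman_rho(judge_scores, human_scores))  # -> 0.9
```

Because Spearman correlation operates on ranks, it rewards a judge that orders systems the same way humans do, even if the absolute score scales differ.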
📝 Abstract
Text-to-Speech (TTS) benchmarks often fail to capture how well models handle nuanced and semantically complex text. Building on $\textit{EmergentTTS}$, we introduce $\textit{EmergentTTS-Eval}$, a comprehensive benchmark covering six challenging TTS scenarios: emotions, paralinguistics, foreign words, syntactic complexity, complex pronunciation (e.g., URLs, formulas), and questions. Crucially, our framework automates both test-case generation and evaluation, making the benchmark easily extensible. Starting from a small set of human-written seed prompts, we iteratively extend them using LLMs to target specific structural, phonetic, and prosodic challenges, resulting in 1,645 diverse test cases. Moreover, we employ a model-as-a-judge approach, using a Large Audio Language Model (LALM) to assess the speech across multiple dimensions, such as expressed emotion and prosodic, intonational, and pronunciation accuracy. We evaluate state-of-the-art open-source and proprietary TTS systems, such as 11Labs, Deepgram, and OpenAI's 4o-mini-TTS, on EmergentTTS-Eval, demonstrating its ability to reveal fine-grained performance differences. Results show that the model-as-a-judge approach offers robust TTS assessment and a high correlation with human preferences. We open source the evaluation $\href{https://github.com/boson-ai/EmergentTTS-Eval-public}{code}$ and the $\href{https://huggingface.co/datasets/bosonai/EmergentTTS-Eval}{dataset}$.
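The multidimensional judging described above can be sketched as a per-dimension scoring step followed by aggregation. The four dimensions come from the abstract; the 0-10 scale, the stubbed judge output, and the plain-mean aggregation are assumptions for illustration, not the paper's actual scoring rubric:

```python
from statistics import mean

# Evaluation dimensions named in the abstract
DIMENSIONS = ("emotion", "prosody", "intonation", "pronunciation")

def aggregate(judge_scores: dict) -> float:
    """Collapse per-dimension LALM judge scores into one scalar per test case."""
    missing = set(DIMENSIONS) - judge_scores.keys()
    if missing:
        raise ValueError(f"judge omitted dimensions: {sorted(missing)}")
    return mean(judge_scores[d] for d in DIMENSIONS)

# Hypothetical per-dimension scores an LALM judge might emit for one utterance
scores = {"emotion": 7.0, "prosody": 8.0, "intonation": 6.0, "pronunciation": 9.0}
print(aggregate(scores))  # -> 7.5
```

Keeping the per-dimension scores around (rather than only the aggregate) is what lets a benchmark like this expose fine-grained differences, e.g. a system that pronounces URLs well but flattens interrogative intonation.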