TTA-Bench: A Comprehensive Benchmark for Evaluating Text-to-Audio Models

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current text-to-audio (TTA) evaluation overemphasizes perceptual quality while neglecting robustness, generalization, and ethical risks. To address this gap, we propose the first comprehensive, three-dimensional evaluation framework—centered on functional performance, reliability, and social responsibility—and introduce a seven-dimensional assessment taxonomy encompassing accuracy, fairness, toxicity, and other critical dimensions. Our framework integrates over 118,000 human annotations with diverse automated metrics. Using 2,999 diverse prompts—generated via human–AI collaboration—and a dual-tier expert–crowd evaluation protocol, we systematically benchmark 10 state-of-the-art TTA models, revealing for the first time their real-world capability boundaries and bias patterns. We publicly release our dataset, evaluation tools, and protocols to establish a rigorous, reproducible benchmark for developing trustworthy, equitable, and socially responsible TTA systems.

📝 Abstract
Text-to-Audio (TTA) generation has made rapid progress, but current evaluation methods remain narrow, focusing mainly on perceptual quality while overlooking robustness, generalization, and ethical concerns. We present TTA-Bench, a comprehensive benchmark for evaluating TTA models across functional performance, reliability, and social responsibility. It covers seven dimensions including accuracy, robustness, fairness, and toxicity, and includes 2,999 diverse prompts generated through automated and manual methods. We introduce a unified evaluation protocol that combines objective metrics with over 118,000 human annotations from both experts and general users. Ten state-of-the-art models are benchmarked under this framework, offering detailed insights into their strengths and limitations. TTA-Bench establishes a new standard for holistic and responsible evaluation of TTA systems. The dataset and evaluation tools are open-sourced at https://nku-hlt.github.io/tta-bench/.
Problem

Research questions and friction points this paper is trying to address.

Evaluating text-to-audio models beyond perceptual quality
Assessing robustness, generalization, and ethical concerns
Establishing a comprehensive benchmark for holistic evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive benchmark for Text-to-Audio model evaluation
Unified protocol combining objective metrics and human annotations (see the sketch after this list)
Open-sourced dataset with 2,999 diverse evaluation prompts
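To make the "unified protocol" idea concrete, below is a minimal Python sketch of how per-dimension scores could be fused from an automated metric and aggregated human ratings. This is an illustration only, not the released TTA-Bench tooling: the record fields, the 1-5 rating scale, the equal weighting, and the `fuse_scores` helper are all assumptions; the actual scripts are on the project page.

```python
# Hypothetical sketch: blending an objective metric with averaged human ratings
# into a single per-(model, dimension) score. Field names are placeholders.
from collections import defaultdict
from statistics import mean

# Toy annotation records: each prompt gets an automated score (e.g. a
# text-audio similarity in [0, 1]) and one or more 1-5 human ratings.
records = [
    {"model": "model_a", "dimension": "accuracy", "objective_score": 0.72, "human_rating": 4},
    {"model": "model_a", "dimension": "accuracy", "objective_score": 0.72, "human_rating": 5},
    {"model": "model_a", "dimension": "toxicity", "objective_score": 0.10, "human_rating": 2},
    {"model": "model_b", "dimension": "accuracy", "objective_score": 0.55, "human_rating": 3},
]

def fuse_scores(records, human_weight=0.5):
    """Average human ratings per (model, dimension), rescale them to [0, 1],
    and blend with the mean objective score via a simple weighted mean."""
    grouped = defaultdict(list)
    for r in records:
        grouped[(r["model"], r["dimension"])].append(r)
    results = {}
    for key, group in grouped.items():
        human = (mean(r["human_rating"] for r in group) - 1) / 4  # map 1-5 to 0-1
        objective = mean(r["objective_score"] for r in group)
        results[key] = human_weight * human + (1 - human_weight) * objective
    return results

for (model, dim), score in sorted(fuse_scores(records).items()):
    print(f"{model:8s} {dim:10s} {score:.3f}")
```

Changing `human_weight`, or using per-dimension weights, would let the same scaffold produce dimension-specific leaderboards; the benchmark's own aggregation rules are defined in its released evaluation tools.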
Hui Wang
College of Computer Science, Nankai University, Tianjin, China
Cheng Liu
College of Computer Science, Nankai University, Tianjin, China
Junyang Chen
College of Computer Science, Nankai University, Tianjin, China
Haoze Liu
College of Computer Science, Nankai University, Tianjin, China
Yuhang Jia
College of Computer Science, Nankai University, Tianjin, China
Shiwan Zhao
Independent Researcher; formerly Research Scientist at IBM Research - China (2000-2020)
AGI, Large Language Model, NLP, Speech, Recommender System
Jiaming Zhou
College of Computer Science, Nankai University, Tianjin, China
Haoqin Sun
Nankai University
Affective computing, Speech signal processing, Audio understanding
Hui Bu
aishell
Speech Recognition, Speech databases and text corpora, Special topics on speech databases and
Yong Qin
College of Computer Science, Nankai University, Tianjin, China