🤖 AI Summary
Current text-to-audio (TTA) evaluation overemphasizes perceptual quality while neglecting robustness, generalization, and ethical risks. To address this gap, we propose the first comprehensive evaluation framework organized around three pillars (functional performance, reliability, and social responsibility) and introduce a seven-dimension assessment taxonomy covering accuracy, fairness, toxicity, and other critical properties. Our framework integrates over 118,000 human annotations with diverse automated metrics. Using 2,999 diverse prompts generated via human–AI collaboration and a dual-tier expert and crowd evaluation protocol, we systematically benchmark 10 state-of-the-art TTA models, revealing for the first time their real-world capability boundaries and bias patterns. We publicly release our dataset, evaluation tools, and protocols to establish a rigorous, reproducible benchmark for developing trustworthy, equitable, and socially responsible TTA systems.
📝 Abstract
Text-to-Audio (TTA) generation has made rapid progress, but current evaluation methods remain narrow, focusing mainly on perceptual quality while overlooking robustness, generalization, and ethical concerns. We present TTA-Bench, a comprehensive benchmark for evaluating TTA models across functional performance, reliability, and social responsibility. It spans seven dimensions, including accuracy, robustness, fairness, and toxicity, and comprises 2,999 diverse prompts constructed through a combination of automated and manual methods. We introduce a unified evaluation protocol that combines objective metrics with over 118,000 human annotations from both experts and general users. Ten state-of-the-art models are benchmarked under this framework, offering detailed insights into their strengths and limitations. TTA-Bench establishes a new standard for holistic and responsible evaluation of TTA systems. The dataset and evaluation tools are open-sourced at https://nku-hlt.github.io/tta-bench/.
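To make the dual-source protocol concrete, here is a minimal sketch of how per-prompt results might be aggregated into per-dimension scores by blending a normalized automated metric with a normalized human annotation. The `PromptResult` schema, its field names, and the equal weighting are illustrative assumptions, not TTA-Bench's published implementation.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical per-prompt record; the released benchmark defines its own schema.
@dataclass
class PromptResult:
    dimension: str     # e.g. "accuracy", "robustness", "fairness", "toxicity"
    auto_score: float  # automated metric, normalized to [0, 1]
    human_score: float # mean human annotation, normalized to [0, 1]

def dimension_scores(results: list[PromptResult], human_weight: float = 0.5) -> dict[str, float]:
    """Aggregate per-prompt results into one score per dimension.

    Blends automated metrics with human annotations via a simple weighted
    average; the 50/50 weighting is an assumption for illustration only.
    """
    by_dim: dict[str, list[float]] = {}
    for r in results:
        blended = human_weight * r.human_score + (1 - human_weight) * r.auto_score
        by_dim.setdefault(r.dimension, []).append(blended)
    return {dim: mean(scores) for dim, scores in by_dim.items()}

# Toy usage with made-up numbers.
results = [
    PromptResult("accuracy", 0.82, 0.78),
    PromptResult("accuracy", 0.74, 0.80),
    PromptResult("toxicity", 0.91, 0.88),
]
print(dimension_scores(results))  # {'accuracy': 0.785, 'toxicity': 0.895}
```

In practice, a protocol like this would normalize each automated metric to a common scale before blending, since raw metrics (e.g. distances vs. classifier probabilities) are not directly comparable across dimensions.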