AI Summary
This study addresses the lack of a unified, efficient benchmark for evaluating large models' ability to generate descriptions across the three major audio domains: sound, music, and speech. The authors present the first comprehensive evaluation benchmark covering all three categories, comprising 1,000 carefully curated samples, and introduce a fine-grained assessment framework along three orthogonal dimensions: accuracy, completeness, and hallucination. Combining traditional metrics (METEOR, BLEU, ROUGE-L) with an LLM-as-Judge approach, they conduct an automated yet human-aligned evaluation of 13 prominent multimodal models, including the OpenAI and Gemini series. Results indicate that the Gemini series achieves the strongest overall performance (Gemini 3 Pro scores 6.00/10), while OpenAI models exhibit fewer hallucinations; all models perform best on speech-related tasks and struggle most with music.
Abstract
We introduce AudioCapBench, a benchmark for evaluating the audio captioning capabilities of large multimodal models. AudioCapBench spans three distinct audio domains: environmental sound, music, and speech, with 1,000 curated evaluation samples drawn from established datasets. We evaluate 13 models across two providers (OpenAI, Google Gemini) using both reference-based metrics (METEOR, BLEU, ROUGE-L) and an LLM-as-Judge framework that scores predictions on three orthogonal dimensions: accuracy (semantic correctness), completeness (coverage of reference content), and hallucination (absence of fabricated content). Our results reveal that Gemini models generally outperform OpenAI models on overall captioning quality, with Gemini 3 Pro achieving the highest overall score (6.00/10), while OpenAI models exhibit lower hallucination rates. All models perform best on speech captioning and worst on music captioning. We release the benchmark and evaluation code to facilitate reproducible audio understanding research.
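The abstract does not specify how the three judge dimensions combine into the reported overall /10 score. A minimal sketch of one plausible aggregation, assuming an unweighted mean over dimensions and samples (the class and function names here are illustrative, not the benchmark's actual API):

```python
from dataclasses import dataclass

@dataclass
class JudgeScores:
    """Hypothetical per-sample LLM-as-Judge scores, each on a 0-10 scale."""
    accuracy: float      # semantic correctness relative to the reference caption
    completeness: float  # coverage of the reference content
    hallucination: float # absence of fabricated content (higher = fewer fabrications)

def overall_score(samples: list[JudgeScores]) -> float:
    """Assumed aggregation: mean of the three dimensions per sample,
    then mean across all evaluated samples."""
    per_sample = [(s.accuracy + s.completeness + s.hallucination) / 3
                  for s in samples]
    return sum(per_sample) / len(samples)

# Example: two samples averaging to an overall score of 6.0/10,
# matching the scale of the reported Gemini 3 Pro result.
scores = [JudgeScores(7.0, 6.0, 5.0), JudgeScores(6.0, 6.0, 6.0)]
print(round(overall_score(scores), 2))  # -> 6.0
```

Treating hallucination as "higher is better" keeps all three dimensions on the same orientation, so a simple mean is well defined; a weighted combination would be equally consistent with the text.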