🤖 AI Summary
Current audio-language models lag significantly behind humans in open-domain, fine-grained audio understanding, largely because existing benchmarks lack discriminative power: their coarse annotations and evaluation metrics fail to capture nuanced differences in generated outputs.
Method: We propose MECAT, a Multi-Expert Constructed Benchmark for Fine-Grained Audio Understanding Tasks, and DATE (Discriminative-Enhanced Audio Text Evaluation), a novel evaluation metric. MECAT integrates domain-specific expert models with chain-of-thought reasoning by large language models to generate multi-perspective, high-fidelity audio descriptions and open-ended question-answer pairs. DATE quantifies output specificity and discriminability via semantic similarity modeling and cross-sample separability analysis.
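The exact DATE formulation is given in the paper and the linked repository; as a rough illustration of how single-sample semantic similarity and cross-sample discriminability can be combined, here is a minimal numpy sketch. The embedding inputs, the softmax temperature, and the geometric-mean combination are illustrative assumptions, not the published definition.

```python
import numpy as np

def date_like_score(cand_emb: np.ndarray, ref_embs: np.ndarray, idx: int) -> float:
    """Illustrative DATE-style score for one candidate caption (not the paper's formula).

    cand_emb: (d,) embedding of the candidate caption for sample `idx`.
    ref_embs: (n, d) embeddings of all n reference captions in the batch.
    Returns a score in [0, 1] that is high only if the candidate is close to
    its own reference AND clearly separable from the other references.
    """
    # Cosine similarities between the candidate and every reference.
    sims = ref_embs @ cand_emb / (
        np.linalg.norm(ref_embs, axis=1) * np.linalg.norm(cand_emb) + 1e-8
    )
    # (a) Single-sample semantic similarity, rescaled from [-1, 1] to [0, 1].
    fidelity = (sims[idx] + 1.0) / 2.0
    # (b) Cross-sample discriminability: softmax mass placed on the candidate's
    # own reference. A generic caption sits near all references, so this mass
    # collapses toward 1/n and the overall score drops.
    weights = np.exp(sims / 0.1)  # temperature 0.1 is an arbitrary choice here
    discriminability = weights[idx] / weights.sum()
    # Combine both terms; a geometric mean is one simple option.
    return float(np.sqrt(fidelity * discriminability))

# Toy usage with random vectors standing in for a sentence encoder.
rng = np.random.default_rng(0)
refs = rng.normal(size=(8, 384))               # 8 reference captions
cand = refs[3] + 0.05 * rng.normal(size=384)   # detailed caption near reference 3
print(date_like_score(cand, refs, idx=3))               # high: faithful and distinctive
print(date_like_score(refs.mean(axis=0), refs, idx=3))  # low: generic, matches everything
```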
Contribution/Results: Experiments demonstrate that the framework substantially improves evaluation sensitivity and reliability. For the first time, it systematically exposes capability bottlenecks of mainstream models along fine-grained dimensions, including timbre, temporal structure, and causal reasoning, establishing foundational assessment infrastructure for advancing audio-language models toward human-level auditory cognition.
📝 Abstract
While large audio-language models have advanced open-ended audio understanding, they still fall short of nuanced human-level comprehension. This gap persists largely because current benchmarks, limited by data annotations and evaluation metrics, fail to reliably distinguish between generic and highly detailed model outputs. To address this, this work introduces MECAT, a Multi-Expert Constructed Benchmark for Fine-Grained Audio Understanding Tasks. Generated via a pipeline that integrates analysis from specialized expert models with Chain-of-Thought large language model reasoning, MECAT provides multi-perspective, fine-grained captions and open-set question-answering pairs. The benchmark is complemented by a novel metric: DATE (Discriminative-Enhanced Audio Text Evaluation). This metric penalizes generic terms and rewards detailed descriptions by combining single-sample semantic similarity with cross-sample discriminability. A comprehensive evaluation of state-of-the-art audio models is also presented, providing new insights into their current capabilities and limitations. The data and code are available at https://github.com/xiaomi-research/mecat