🤖 AI Summary
Current large audio-language models lack effective benchmarks for evaluating deep musical understanding. To address this gap, we introduce HumMusQA, the first high-quality question-answering dataset constructed and validated by music experts. Its 320 structured questions rigorously assess a model's genuine capacity for musical perception and interpretation while probing its robustness against unimodal shortcuts. Built with deep human involvement to capture complex audio semantics, HumMusQA enables systematic evaluation of state-of-the-art models, revealing significant limitations in their music understanding and exposing their reliance on non-semantic cues.
📝 Abstract
Evaluating music understanding in Large Audio-Language Models (LALMs) requires a rigorously defined benchmark that truly tests whether models can perceive and interpret music, a standard that current data methodologies frequently fail to meet. This paper introduces HumMusQA, a dataset of 320 hand-written questions curated and validated by experts with musical training, and argues that such focused, manual curation is better suited to probing complex audio comprehension. To demonstrate the dataset's utility, we benchmark six state-of-the-art LALMs and additionally test their robustness to unimodal shortcuts.
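The unimodal-shortcut test can be pictured as an ablation: score each model once with the audio and question together, and once with the question alone. The sketch below illustrates this idea under stated assumptions; the `QAItem` fields and the `answer_fn(audio, question)` interface are hypothetical placeholders, not the paper's actual evaluation API.

```python
# Minimal sketch of a unimodal-shortcut check: compare a model's accuracy
# with and without the audio input. All names here are illustrative
# assumptions, not the benchmark's real interface.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence


@dataclass
class QAItem:
    audio_path: str  # path to the music clip
    question: str    # expert-written question
    answer: str      # gold answer (e.g., a multiple-choice letter)


AnswerFn = Callable[[Optional[str], str], str]


def accuracy(items: Sequence[QAItem], answer_fn: AnswerFn) -> float:
    """Fraction of items answered correctly by answer_fn(audio, question)."""
    correct = sum(
        answer_fn(item.audio_path, item.question).strip() == item.answer
        for item in items
    )
    return correct / len(items)


def shortcut_gap(items: Sequence[QAItem], answer_fn: AnswerFn) -> dict:
    """Compare audio+text accuracy against text-only accuracy (audio withheld)."""
    full = accuracy(items, answer_fn)
    text_only = accuracy(items, lambda _audio, q: answer_fn(None, q))
    return {"audio+text": full, "text_only": text_only, "gap": full - text_only}
```

A model that scores nearly as well with the audio withheld (a small `gap`) is likely exploiting question-side regularities rather than genuinely perceiving the music.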