HumMusQA: A Human-written Music Understanding QA Benchmark Dataset

📅 2026-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of effective benchmarks for evaluating deep musical understanding in current large audio-language models. To this end, the authors introduce HumMusQA, a question-answering dataset constructed and validated by music experts. It comprises 320 structured questions designed to assess a model's genuine capacity for musical perception and interpretation while probing its robustness against unimodal shortcuts. Built with deep human involvement to capture complex audio semantics, HumMusQA enables systematic evaluation of state-of-the-art models, revealing significant limitations in their authentic music understanding and exposing their reliance on non-semantic cues.
📝 Abstract
The evaluation of music understanding in Large Audio-Language Models (LALMs) requires a rigorously defined benchmark that truly tests whether models can perceive and interpret music, a standard that current data methodologies frequently fail to meet. This paper introduces a meticulously structured approach to music evaluation, proposing a new dataset of 320 human-written questions curated and validated by experts with musical training, and arguing that such focused, manual curation is superior for probing complex audio comprehension. To demonstrate the dataset's use, the authors benchmark six state-of-the-art LALMs and additionally test their robustness to unimodal shortcuts.
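The unimodal-shortcut test described above can be sketched as a simple ablation: score each model on the QA items twice, once with the audio and once with the audio withheld, and compare the two accuracies. This is a minimal illustration, not the paper's actual protocol; the `answer_fn` interface, the item schema (`question`, `audio`, `answer`), and the gap metric are all assumptions for the sketch.

```python
from typing import Callable, Optional, Sequence


def evaluate_shortcut_gap(
    answer_fn: Callable[[str, Optional[bytes]], str],
    items: Sequence[dict],
) -> dict:
    """Score a model on QA items with and without the audio input.

    Each item is assumed to carry 'question', 'audio', and 'answer' keys.
    A small gap between the two accuracies suggests the model answers from
    text-only (unimodal) shortcuts rather than from listening to the music.
    """
    # Accuracy when the model sees both the question and the audio clip.
    with_audio = sum(
        answer_fn(it["question"], it["audio"]) == it["answer"] for it in items
    )
    # Accuracy when the audio is withheld (text-only ablation).
    text_only = sum(
        answer_fn(it["question"], None) == it["answer"] for it in items
    )
    n = len(items)
    return {
        "audio_acc": with_audio / n,
        "text_only_acc": text_only / n,
        "gap": (with_audio - text_only) / n,
    }
```

A model with genuine audio grounding should show a large positive `gap`; a model whose `text_only_acc` is close to its `audio_acc` is likely exploiting question-side cues.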
Problem

Research questions and friction points this paper is trying to address.

music understanding
Large Audio-Language Models
benchmark dataset
audio comprehension
human-written QA
Innovation

Methods, ideas, or system contributions that make the work stand out.

music understanding
human-written QA
audio-language models
benchmark dataset
expert curation