ArtistMus: A Globally Diverse, Artist-Centric Benchmark for Retrieval-Augmented Music Question Answering

📅 2025-12-05
🤖 AI Summary
Current large language models (LLMs) exhibit limited performance on Music Question Answering (MQA), primarily due to sparse musical knowledge in pretraining corpora and the absence of factual, context-aware benchmarks integrating artist metadata and historical background. To address this, we introduce MusWikiDB, a music-focused vector database, and ArtistMus, the first structured, globally representative MQA benchmark designed for retrieval-augmented reasoning over diverse artists. Methodologically, we construct a vector database unifying structured metadata, historical context, and multimodal music information, and fine-tune open-source LLMs within a Retrieval-Augmented Generation (RAG) framework. Experimental results demonstrate that RAG boosts open-source model accuracy by up to 56.8 percentage points, approaching proprietary model performance. Retrieval over MusWikiDB is also more accurate and faster than over a general-purpose Wikipedia corpus, while RAG-style fine-tuning improves factual consistency and cross-domain generalization, establishing a foundation for rigorous, knowledge-grounded music understanding.

📝 Abstract
Recent advances in large language models (LLMs) have transformed open-domain question answering, yet their effectiveness in music-related reasoning remains limited due to sparse music knowledge in pretraining data. While music information retrieval and computational musicology have explored structured and multimodal understanding, few resources support factual and contextual music question answering (MQA) grounded in artist metadata or historical context. We introduce MusWikiDB, a vector database of 3.2M passages from 144K music-related Wikipedia pages, and ArtistMus, a benchmark of 1,000 questions on 500 diverse artists with metadata such as genre, debut year, and topic. These resources enable systematic evaluation of retrieval-augmented generation (RAG) for MQA. Experiments show that RAG markedly improves factual accuracy; open-source models gain up to +56.8 percentage points (for example, Qwen3 8B improves from 35.0 to 91.8), approaching proprietary model performance. RAG-style fine-tuning further boosts both factual recall and contextual reasoning, improving results on both in-domain and out-of-domain benchmarks. MusWikiDB also yields approximately 6 percentage points higher accuracy and 40% faster retrieval than a general-purpose Wikipedia corpus. We release MusWikiDB and ArtistMus to advance research in music information retrieval and domain-specific question answering, establishing a foundation for retrieval-augmented reasoning in culturally rich domains such as music.
Problem

Research questions and friction points this paper is trying to address.

Addresses limited music knowledge in LLMs for factual question answering.
Provides a benchmark for evaluating retrieval-augmented music QA systems.
Enhances accuracy and speed in music information retrieval tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created MusWikiDB vector database from Wikipedia music pages
Introduced ArtistMus benchmark for retrieval-augmented music QA
Used RAG to boost accuracy and contextual reasoning in models
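The RAG setup these bullets describe, retrieving relevant passages from a vector database and prepending them to the question before generation, can be sketched as follows. This is a minimal illustration only: the toy passages, the bag-of-words similarity, and all names are hypothetical stand-ins, not the paper's actual MusWikiDB index or embedding model, which use dense neural embeddings over 3.2M Wikipedia passages.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; the real system would use a neural encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini-corpus standing in for MusWikiDB passages.
passages = [
    "Miles Davis was an American jazz trumpeter who debuted in the 1940s.",
    "BTS is a South Korean boy band formed in Seoul in 2013.",
    "Fela Kuti pioneered the Afrobeat genre in Nigeria.",
]
index = [(p, embed(p)) for p in passages]

def retrieve(query, k=1):
    # Rank all indexed passages by similarity to the query; return top-k.
    q = embed(query)
    ranked = sorted(index, key=lambda pe: cosine(q, pe[1]), reverse=True)
    return [p for p, _ in ranked[:k]]

def build_prompt(question):
    # Retrieved passages are prepended as context before the question,
    # as in a standard RAG prompt.
    context = "\n".join(retrieve(question, k=1))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("What genre did Fela Kuti pioneer?")
```

In the paper's full pipeline the assembled prompt would then be passed to an open-source LLM (e.g. Qwen3 8B), which is where the reported accuracy gains come from.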
Daeyong Kwon
Graduate School of Culture Technology, KAIST
SeungHeon Doh
Graduate School of Culture Technology, KAIST
Juhan Nam
KAIST
Music Technology · Music Information Retrieval · Audio Signal Processing · Music Processing