🤖 AI Summary
Current large language models (LLMs) perform poorly on music question answering (MQA), largely because musical knowledge is sparse in pretraining corpora and no factual, context-aware benchmark integrates artist metadata with historical background. To address this, we introduce MusWikiDB, a vector database built from music-related Wikipedia passages, and ArtistMus, the first structured, globally representative MQA benchmark designed for retrieval-augmented reasoning over diverse artists. Methodologically, we construct the vector database from structured metadata and historical context, and fine-tune open-source LLMs within a retrieval-augmented generation (RAG) framework. Experiments show that RAG boosts open-source model accuracy by up to 56.8 percentage points, approaching proprietary model performance, and that MusWikiDB yields higher accuracy and faster retrieval than a general-purpose Wikipedia corpus while improving factual consistency and cross-domain generalization. Together, these resources establish a standard for rigorous, knowledge-grounded music understanding.
📝 Abstract
Recent advances in large language models (LLMs) have transformed open-domain question answering, yet their effectiveness in music-related reasoning remains limited due to sparse music knowledge in pretraining data. While music information retrieval and computational musicology have explored structured and multimodal understanding, few resources support factual and contextual music question answering (MQA) grounded in artist metadata or historical context. We introduce MusWikiDB, a vector database of 3.2M passages from 144K music-related Wikipedia pages, and ArtistMus, a benchmark of 1,000 questions on 500 diverse artists with metadata such as genre, debut year, and topic. These resources enable systematic evaluation of retrieval-augmented generation (RAG) for MQA. Experiments show that RAG markedly improves factual accuracy; open-source models gain up to +56.8 percentage points (for example, Qwen3 8B improves from 35.0 to 91.8), approaching proprietary model performance. RAG-style fine-tuning further boosts both factual recall and contextual reasoning, improving results on both in-domain and out-of-domain benchmarks. MusWikiDB also yields approximately 6 percentage points higher accuracy and 40% faster retrieval than a general-purpose Wikipedia corpus. We release MusWikiDB and ArtistMus to advance research in music information retrieval and domain-specific question answering, establishing a foundation for retrieval-augmented reasoning in culturally rich domains such as music.
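The RAG setup described above can be sketched minimally: retrieve the passages most similar to a question from a passage store, then prepend them as context for the LLM. The sketch below is illustrative only; the passages, function names, and the toy bag-of-words similarity are assumptions for demonstration (the actual system uses a dense vector index over 3.2M Wikipedia passages).

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; the real pipeline would use a dense encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical passages standing in for MusWikiDB entries.
PASSAGES = [
    "Miles Davis was an American jazz trumpeter who debuted in 1944.",
    "Björk is an Icelandic singer known for experimental electronic music.",
    "BTS is a South Korean boy band that debuted in 2013.",
]

def retrieve(question, k=1):
    # Rank passages by similarity to the question; return the top k.
    q = embed(question)
    ranked = sorted(PASSAGES, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

def build_prompt(question, k=1):
    # Prepend retrieved evidence so the LLM answers from grounded context.
    context = "\n".join(retrieve(question, k))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("When did BTS debut?")
```

Fine-tuning in a "RAG style" then means training the model on prompts of exactly this shape, so it learns to ground answers in the retrieved context rather than in parametric memory.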