🤖 AI Summary
This study identifies pervasive cultural and genre biases in music AI systems that misrepresent and distort marginalized musical traditions from the Global South, such as Indian rāga, thereby eroding creator trust, constraining creative expression, and exacerbating cultural erasure. To address this, we propose a three-tier fairness-enhancement framework spanning data curation, model design, and human–AI interaction, integrating critical AI analysis, cross-cultural musicology, participatory data governance, and inclusive interface design. Our key contribution is the first systematic deconstruction of bias propagation pathways across the AI development lifecycle, coupled with deep contextual embedding of cultural knowledge at every stage. Empirical evaluation demonstrates significant improvements in both musical representation accuracy and cultural sensitivity. The framework provides a transferable, methodology-driven foundation for developing transparent, trustworthy, and pluralistic music AI systems.
📝 Abstract
In recent years, the music research community has examined the risks of AI models for music; generative AI models in particular have raised concerns about copyright, deepfakes, and transparency. In our work, we raise concerns about cultural and genre biases in AI for music systems (music-AI systems), which affect stakeholders including creators, distributors, and listeners and shape representation in AI for music. These biases can misrepresent marginalized traditions, especially from the Global South, producing inauthentic outputs (e.g., distorted rāgas) that reduce creators' trust in these systems. Such harms risk reinforcing biases, limiting creativity, and contributing to cultural erasure. To address this, we offer recommendations at the dataset, model, and interface levels of music-AI systems.