🤖 AI Summary
Graph neural networks (GNNs) for symbolic music analysis suffer from poor interpretability, which hinders their trustworthy deployment. Method: We propose a music-structure-aware counterfactual explanation framework that generates minimal, syntactically valid, and musically coherent perturbations that verifiably change a model's prediction. The approach integrates domain-specific musical priors (including pitch, rhythm, and harmonic constraints) and employs a graph-structure-aware perturbation mechanism to preserve musical plausibility. Explanations are visualized using standard tools (e.g., Verovio). Contribution/Results: Evaluated on multiple symbolic music classification tasks, the method produces intuitive, human-readable, and empirically verifiable explanations. It significantly improves the transparency of model decisions and establishes the first GNN explanation framework for symbolic music that combines formal rigor with domain-specific practicality.
📝 Abstract
Interpretability is essential for deploying deep learning models in symbolic music analysis, yet most research emphasizes model performance over explanation. To address this, we introduce MUSE-Explainer, a new method that reveals how graph neural network (GNN) models for music make decisions by providing clear, human-friendly explanations. Our approach generates counterfactual explanations: small, meaningful changes to a musical score graph that alter the model's prediction while keeping the result musically coherent. Unlike existing methods, MUSE-Explainer tailors its explanations to the structure of musical data and avoids unrealistic or confusing outputs. We evaluate our method on a music analysis task and show that it offers intuitive insights that can be visualized with standard music tools such as Verovio.
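The core counterfactual idea described above can be sketched in a few lines: search for the smallest perturbation of a score graph that flips a classifier's prediction while a domain constraint keeps the result plausible. The snippet below is a minimal illustration only; the toy `predict` classifier, the `is_musically_valid` constraint, and the note graph are invented stand-ins, not the paper's actual GNN, constraints, or search procedure.

```python
import itertools

def predict(edges):
    """Toy stand-in for a graph classifier: label 1 if the note graph
    contains more than two 'dissonant' edges, else label 0."""
    return int(sum(1 for (_, _, kind) in edges if kind == "dissonant") > 2)

def is_musically_valid(notes, edges):
    """Placeholder domain constraint: every note must keep at least one
    incident edge, so the perturbed graph remains a connected score graph."""
    touched = {n for (u, v, _) in edges for n in (u, v)}
    return touched == notes

def counterfactual(notes, edges, max_removals=2):
    """Exhaustively search for a smallest set of edge removals that flips
    the prediction while satisfying the validity constraint."""
    base = predict(edges)
    for k in range(1, max_removals + 1):
        for drop in itertools.combinations(range(len(edges)), k):
            kept = [e for i, e in enumerate(edges) if i not in drop]
            if is_musically_valid(notes, kept) and predict(kept) != base:
                return [edges[i] for i in drop]  # minimal perturbation found
    return None

# Tiny note graph: nodes are pitches, edges carry an interval label.
edges = [
    ("C4", "E4", "consonant"),
    ("E4", "G4", "consonant"),
    ("C4", "F#4", "dissonant"),
    ("F#4", "B4", "dissonant"),
    ("B4", "C5", "dissonant"),
]
notes = {n for (u, v, _) in edges for n in (u, v)}

# Removing a single dissonant edge flips the label while every note
# keeps at least one incident edge.
print(counterfactual(notes, edges))
```

A real instantiation would replace the brute-force subset search with a differentiable or guided perturbation over the GNN's input graph, and the validity check with the pitch, rhythm, and harmonic constraints the paper describes.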