MUSE-Explainer: Counterfactual Explanations for Symbolic Music Graph Classification Models

📅 2025-09-30
🤖 AI Summary
Graph neural networks (GNNs) for symbolic music analysis suffer from poor interpretability, hindering their trustworthy deployment. Method: We propose a music-structure-aware counterfactual explanation framework that generates minimal, syntactically valid, and musically coherent perturbations to induce verifiable prediction changes. Our approach integrates domain-specific musical priors—including pitch, rhythm, and harmonic constraints—and employs a graph-structure-aware perturbation mechanism to ensure logical musicality. Explanations are visualized using standard tools (e.g., Verovio). Contribution/Results: Evaluated on multiple symbolic music classification tasks, our method produces intuitive, human-readable, and empirically verifiable explanations. It significantly enhances model decision transparency and establishes the first GNN explanation framework for symbolic music that jointly satisfies formal rigor and domain-specific practicality.

📝 Abstract
Interpretability is essential for deploying deep learning models in symbolic music analysis, yet most research emphasizes model performance over explanation. To address this, we introduce MUSE-Explainer, a new method that helps reveal how music Graph Neural Network models make decisions by providing clear, human-friendly explanations. Our approach generates counterfactual explanations by making small, meaningful changes to musical score graphs that alter a model's prediction while ensuring the results remain musically coherent. Unlike existing methods, MUSE-Explainer tailors its explanations to the structure of musical data and avoids unrealistic or confusing outputs. We evaluate our method on a music analysis task and show it offers intuitive insights that can be visualized with standard music tools such as Verovio.
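The core mechanism described above, searching for a minimal perturbation of a musical score that flips a classifier's prediction, can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's implementation: the threshold classifier stands in for the GNN, the per-note pitch shift stands in for the graph-structure-aware perturbation, and the `max_shift` bound is an illustrative stand-in for the musical coherence constraints.

```python
# Hypothetical sketch of minimal counterfactual search over a score's notes.
# The toy classifier and pitch-shift constraint are illustrative only; the
# paper uses a GNN over score graphs with richer musical priors.

def classify(pitches):
    """Toy stand-in for a GNN classifier: label 1 if mean MIDI pitch >= 63."""
    return 1 if sum(pitches) / len(pitches) >= 63 else 0

def counterfactual(pitches, max_shift=4):
    """Perturb one note at a time (within +/- max_shift semitones) until the
    predicted label flips; return the lowest-cost edit found, or None."""
    original = classify(pitches)
    best = None  # (edit_cost_in_semitones, perturbed_pitches)
    for i in range(len(pitches)):
        for shift in range(-max_shift, max_shift + 1):
            if shift == 0:
                continue
            candidate = list(pitches)
            candidate[i] += shift
            if classify(candidate) != original:
                cost = abs(shift)
                if best is None or cost < best[0]:
                    best = (cost, candidate)
    return best

notes = [60, 62, 64, 65]          # C4, D4, E4, F4 -> predicted label 0
result = counterfactual(notes)    # smallest pitch edit that flips the label
```

A real system would restrict candidate perturbations to musically valid ones (e.g., staying in key, preserving rhythm) rather than arbitrary semitone shifts, which is exactly the role of the domain-specific priors the summary mentions.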
Problem

Research questions and friction points this paper is trying to address.

Most symbolic music research prioritizes model performance over explanation, leaving GNN decisions opaque
Existing explanation methods can produce unrealistic or musically incoherent counterfactuals
Poor interpretability hinders trustworthy deployment of deep models in music analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates counterfactual explanations for music graph models
Modifies musical score graphs to alter model predictions
Ensures explanations remain musically coherent and realistic
Baptiste Hilaire
Institute of Computational Perception, Johannes Kepler University Linz, Austria
Emmanouil Karystinaios
Institute of Computational Perception, Johannes Kepler University Linz, Austria
Gerhard Widmer
Professor of Computer Science, Johannes Kepler University Linz
Artificial Intelligence · Machine Learning · Sound and Music Computing · Music Information Retrieval · Computational Musicology