Is Meta-Path Attention an Explanation? Evidence of Alignment and Decoupling in Heterogeneous GNNs

📅 2026-02-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether meta-path attention in heterogeneous graph neural networks genuinely reflects the actual contribution of meta-paths and under what conditions it becomes decoupled from their true influence. To this end, the authors propose MetaXplain, a meta-path-aware explanation framework that requires no model modification and integrates gradient-based, perturbation-based, and Shapley-value-based explainers through view decomposition, pattern-compliant perturbations, and fusion-aware attribution for post-hoc analysis. The study presents the first systematic evidence that meta-path attention can both align with and significantly diverge from explanation outcomes, introducing the MP-AEA metric to quantify this consistency. Experiments demonstrate that MetaXplain outperforms random baselines, and retraining on explanation-derived subgraphs effectively denoises inputs, yielding performance improvements or preservation for HAN and HAN-GCN on ACM, DBLP, and IMDB benchmarks.

📝 Abstract
Meta-path-based heterogeneous graph neural networks aggregate over meta-path-induced views, and their semantic-level attention over meta-path channels is widely used as a narrative for "which semantics matter." We study this assumption empirically by asking: when does meta-path attention reflect meta-path importance, and when can it decouple? A key challenge is that most post-hoc GNN explainers are designed for homogeneous graphs, and naive adaptations to heterogeneous neighborhoods can mix semantics and confound perturbations. To enable a controlled empirical analysis, we introduce MetaXplain, a meta-path-aware post-hoc explanation protocol that applies existing explainers in the native meta-path view domain via (i) view-factorized explanations, (ii) schema-valid channel-wise perturbations, and (iii) fusion-aware attribution, without modifying the underlying predictor. We benchmark representative gradient-, perturbation-, and Shapley-style explainers on ACM, DBLP, and IMDB with HAN and HAN-GCN, comparing against xPath and type-matched random baselines under standard faithfulness metrics. To quantify attention reliability, we propose Meta-Path Attention–Explanation Alignment (MP-AEA), which measures rank correlation between learned attention weights and explanation-derived meta-path contribution scores across random runs. Our results show that meta-path-aware explanations typically outperform random controls, while MP-AEA reveals both high-alignment and statistically significant decoupling regimes depending on the dataset and backbone; moreover, retraining on explanation-induced subgraphs often preserves, and in some noisy regimes improves, predictive performance, suggesting an explanation-as-denoising effect.
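The abstract defines MP-AEA as a rank correlation between learned meta-path attention weights and explanation-derived contribution scores. A minimal sketch of that idea, assuming a tie-free Spearman rank correlation and illustrative channel scores (the function name `spearman_rho` and all values are hypothetical, not from the paper):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for equal-length lists without ties (illustrative)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical attention weights over three meta-path channels
attention = [0.6, 0.3, 0.1]
# Explanation-derived contribution scores for the same channels
aligned = [0.5, 0.35, 0.15]     # same ranking -> MP-AEA-style score of 1.0
decoupled = [0.1, 0.2, 0.7]     # reversed ranking -> score of -1.0

print(spearman_rho(attention, aligned))    # 1.0
print(spearman_rho(attention, decoupled))  # -1.0
```

High values indicate the alignment regime the paper describes, while low or negative values correspond to its decoupling regime; the actual MP-AEA aggregates such correlations across random runs.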
Problem

Research questions and friction points this paper is trying to address.

meta-path attention
heterogeneous GNNs
explanation alignment
attention decoupling
GNN explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

MetaXplain
heterogeneous GNNs
meta-path attention
post-hoc explanation
MP-AEA
Maiqi Jiang
William & Mary
Noman Ali
Indian Institute of Technology
Yiran Ding
HDU
LLMMLSys
Yanfu Zhang
William & Mary