MAGE: Model-Level Graph Neural Networks Explanations via Motif-based Graph Generation

📅 2024-05-21
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the limited interpretability of graph neural networks (GNNs) in molecular tasks, and the failure of existing model-level methods (e.g., XGNN, GNNInterpreter) to recover critical cyclic substructures such as rings, this paper proposes MAGE, a model-level explanation framework that treats chemically meaningful motifs as the fundamental units of explanation generation. The approach comprises three components: motif decomposition, attention-based identification of class-specific motifs, and motif-guided graph generation for each class. Together these ensure the generated explanations incorporate valid substructures and remain human-understandable. Quantitative and qualitative evaluations on six real-world molecular datasets demonstrate the method's effectiveness over prior model-level explainers.
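As a toy illustration of the motif-guided generation step in the summary above, the sketch below samples motifs from a class-conditional distribution and chains them with single connector bonds. The vocabulary format, the connector-bond rule, and the function name are all hypothetical simplifications for illustration, not the paper's learned generator.

```python
import random

def generate_by_motifs(motif_vocab, class_probs, n_motifs, seed=0):
    """Toy motif-guided generator (hypothetical, not MAGE itself).

    motif_vocab: list of (num_atoms, local_bond_list) entries.
    class_probs: class-conditional sampling weights over the vocabulary.
    Chosen motifs are laid out with fresh atom indices and joined by
    one connector bond between consecutive motifs.
    """
    rng = random.Random(seed)
    chosen = rng.choices(range(len(motif_vocab)), weights=class_probs, k=n_motifs)
    atoms, bonds = [], []
    prev_last = None
    for idx in chosen:
        size, local_bonds = motif_vocab[idx]
        offset = len(atoms)
        atoms.extend(range(offset, offset + size))
        # re-index the motif's internal bonds into the global graph
        bonds.extend((offset + u, offset + v) for u, v in local_bonds)
        if prev_last is not None:
            bonds.append((prev_last, offset))  # connector bond between motifs
        prev_last = offset + size - 1
    return atoms, bonds
```

A vocabulary of a six-membered ring plus a single-atom motif, sampled twice with all probability mass on the ring, yields two rings joined by one connector bond.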

📝 Abstract
Graph Neural Networks (GNNs) have shown remarkable success in molecular tasks, yet their interpretability remains challenging. Traditional model-level explanation methods like XGNN and GNNInterpreter often fail to identify valid substructures like rings, leading to questionable interpretability. This limitation stems from XGNN's atom-by-atom approach and GNNInterpreter's reliance on average graph embeddings, which overlook the essential structural elements crucial for molecules. To address these gaps, we introduce an innovative Motif-bAsed GNN Explainer (MAGE) that uses motifs as fundamental units for generating explanations. Our approach begins with extracting potential motifs through a motif decomposition technique. Then, we utilize an attention-based learning method to identify class-specific motifs. Finally, we employ a motif-based graph generator for each class to create molecular graph explanations based on these class-specific motifs. This novel method not only incorporates critical substructures into the explanations but also guarantees their validity, yielding results that are human-understandable. Our proposed method's effectiveness is demonstrated through quantitative and qualitative assessments conducted on six real-world molecular datasets.
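One simple graph-theoretic way to approximate the motif decomposition step is to cut every bridge (an edge whose removal disconnects the graph, i.e., every acyclic bond) so that rings survive intact as fragments. The sketch below is an assumed stand-in for the paper's actual decomposition technique; `find_bridges` and `motif_decompose` are hypothetical names.

```python
from collections import defaultdict

def find_bridges(n, edges):
    """Tarjan's bridge-finding: return the set of edge indices whose
    removal would disconnect the graph (acyclic bonds)."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc, low = [-1] * n, [0] * n
    bridges, timer = set(), [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, ei in adj[u]:
            if ei == parent_edge:
                continue
            if disc[v] != -1:                      # back edge
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, ei)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:               # no cycle through (u, v)
                    bridges.add(ei)

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges

def motif_decompose(n, edges):
    """Cut every bridge; each remaining connected component is a
    candidate motif (rings stay intact, acyclic atoms fall out alone)."""
    bridges = find_bridges(n, edges)
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        if i not in bridges:
            adj[u].append(v)
            adj[v].append(u)
    seen, motifs = [False] * n, []
    for s in range(n):
        if not seen[s]:
            comp, stack = [], [s]
            seen[s] = True
            while stack:
                u = stack.pop()
                comp.append(u)
                for v in adj[u]:
                    if not seen[v]:
                        seen[v] = True
                        stack.append(v)
            motifs.append(sorted(comp))
    return motifs
```

On a toluene-like skeleton (a six-atom ring with one attached atom), this keeps the ring as a single motif and splits off the substituent, which is exactly the behavior atom-by-atom generators like XGNN cannot guarantee.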
Problem

Research questions and friction points this paper is trying to address.

Improving interpretability of Graph Neural Networks for molecular tasks
Addressing limitations in identifying valid substructures like rings
Generating human-understandable explanations using motif-based approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses motifs as fundamental explanation units
Employs attention-based class-specific motif identification
Generates valid molecular graphs via motifs
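The attention-based, class-specific motif identification in the bullets above could look roughly like the following: each candidate motif embedding is scored against a per-class query vector and the scores are normalized with softmax, so high-weight motifs are read off as class-specific. This is a minimal sketch under assumed shapes, not the paper's actual attention module.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_motif_scores(motif_embs, class_query):
    """Dot-product attention of motif embeddings against a (hypothetical)
    learned class query vector; returns a weight per motif summing to 1."""
    raw = [sum(a * b for a, b in zip(emb, class_query)) for emb in motif_embs]
    return softmax(raw)
```

Motifs whose attention weight is high for a class would then feed the class-conditional generator as its building blocks.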