Explaining GNN Explanations with Edge Gradients

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing GNN interpretability methods exhibit inconsistent performance across complex architectures and tasks and lack a unified theoretical foundation. To address this, we establish the first rigorous theoretical connection between perturbation-based and gradient-based methods, introducing *edge gradients* as a unifying explanatory tool: their sign approximates GNNExplainer's output, and for linear GNNs they are equivalent to occlusion (edge-masking) search. Our framework integrates edge gradient computation, perturbation-based evaluation, gradient backpropagation, and subgraph extraction from the computation graph. We validate it systematically on both synthetic and real-world graph datasets. Experiments demonstrate that edge gradients significantly improve explanation reliability and cross-model consistency, offering an interpretability paradigm for GNNs that balances theoretical rigor with practical effectiveness.

📝 Abstract
In recent years, the remarkable success of graph neural networks (GNNs) on graph-structured data has prompted a surge of methods for explaining GNN predictions. However, the state-of-the-art for GNN explainability remains in flux. Different comparisons find mixed results for different methods, with many explainers struggling on more complex GNN architectures and tasks. This presents an urgent need for a more careful theoretical analysis of competing GNN explanation methods. In this work we take a closer look at GNN explanations in two different settings: input-level explanations, which produce explanatory subgraphs of the input graph, and layerwise explanations, which produce explanatory subgraphs of the computation graph. We establish the first theoretical connections between the popular perturbation-based and classical gradient-based methods, as well as point out connections between other recently proposed methods. At the input level, we demonstrate conditions under which GNNExplainer can be approximated by a simple heuristic based on the sign of the edge gradients. In the layerwise setting, we point out that edge gradients are equivalent to occlusion search for linear GNNs. Finally, we demonstrate how our theoretical results manifest in practice with experiments on both synthetic and real datasets.
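The abstract's layerwise claim, that edge gradients are equivalent to occlusion search for linear GNNs, can be checked directly on a toy model. The sketch below is a hypothetical minimal example (not the paper's code): a one-layer linear GNN with a scalar sum readout, where the analytic gradient of the prediction with respect to an edge-mask entry matches the score drop from occluding (zeroing) that edge.

```python
import numpy as np

# Hypothetical toy setup: a one-layer *linear* GNN on a small graph.
# Node features X, adjacency A, weight vector W; the scalar prediction
# is a sum readout of (A * mask) @ X @ W, where mask gates each edge.
rng = np.random.default_rng(0)
n, d = 4, 3
X = rng.normal(size=(n, d))
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = rng.normal(size=(d,))

def predict(mask):
    # mask: per-edge multipliers applied entrywise to the adjacency
    return float(np.sum((A * mask) @ X @ W))

full = np.ones_like(A)
score_full = predict(full)

# Edge gradient of the prediction w.r.t. the mask entry for edge (i, j):
# for this linear model it is simply A[i, j] * (X @ W)[j].
i, j = 0, 1
grad_ij = A[i, j] * (X @ W)[j]

# Occlusion: drop edge (i, j) and measure the change in the prediction.
occl = full.copy()
occl[i, j] = 0.0
occlusion_drop = score_full - predict(occl)

# For a linear GNN the two coincide (up to floating point).
print(abs(grad_ij - occlusion_drop) < 1e-9)  # → True
```

For nonlinear GNNs this identity no longer holds exactly, which is where the paper's approximation results come in.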
Problem

Research questions and friction points this paper is trying to address.

Analyzing theoretical connections between GNN explanation methods
Comparing input-level and layerwise GNN explanation approaches
Validating edge gradient-based heuristics for GNN explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing GNN explanations using edge gradients
Connecting perturbation-based and gradient-based methods
Validating theory with synthetic and real datasets
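The sign-of-edge-gradient heuristic mentioned above can be sketched as follows. This is an illustrative toy in NumPy under the same linear-readout assumption as before, not the paper's implementation: edges whose gradient is positive (removing them would lower the score) are kept as the explanatory subgraph.

```python
import numpy as np

# Hypothetical sketch of the sign-of-edge-gradient heuristic.
rng = np.random.default_rng(1)
n, d = 5, 3
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(A, 0.0)  # no self-loops
W = rng.normal(size=(d,))

# Edge gradients of the scalar readout sum(A @ X @ W) with respect to
# an all-ones edge mask; for this linear toy they are A[i, j] * (X @ W)[j].
edge_grad = A * (X @ W)[None, :]

# Heuristic explanation: the subgraph of existing edges whose gradient is
# positive, i.e. edges that push the prediction up.
explanation = (edge_grad > 0) & (A > 0)
print(int(explanation.sum()), "edges kept out of", int(A.sum()))
```

In practice the same selection rule would be applied to gradients obtained by backpropagation through a trained GNN rather than to this closed-form linear expression.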