Discrete Diffusion-Based Model-Level Explanation of Heterogeneous GNNs with Node Features

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weak interpretability of heterogeneous graph neural networks (HGNNs) and the inability of existing post-hoc explanation methods to faithfully model realistic discrete node attributes (e.g., keywords, units) hinder transparent decision-making. To address this, we propose the first model-level explanation framework grounded in discrete denoising diffusion processes. Our method jointly generates semantically coherent subgraph structures and multi-typed discrete node features directly in the discrete space—bypassing restrictive one-hot encodings and continuous embeddings—while incorporating model gradients for optimization. Extensive experiments on multiple real-world heterogeneous graph datasets demonstrate that our explanations significantly outperform state-of-the-art baselines in both fidelity and plausibility. The resulting explanations enhance human-understandable and trustworthy reasoning behind HGNN predictions, advancing the interpretability of complex heterogeneous graph models.

📝 Abstract
Many real-world datasets, such as citation networks, social networks, and molecular structures, are naturally represented as heterogeneous graphs, where nodes belong to different types and have additional features. For example, in a citation network, nodes representing "Paper" or "Author" may include attributes like keywords or affiliations. A critical machine learning task on these graphs is node classification, which is useful for applications such as fake news detection, corporate risk assessment, and molecular property prediction. Although Heterogeneous Graph Neural Networks (HGNNs) perform well in these contexts, their predictions remain opaque. Existing post-hoc explanation methods lack support for actual node features beyond one-hot encoding of node type and often fail to generate realistic, faithful explanations. To address these gaps, we propose DiGNNExplainer, a model-level explanation approach that synthesizes heterogeneous graphs with realistic node features via discrete denoising diffusion. In particular, we generate realistic discrete features (e.g., bag-of-words features) using diffusion models within a discrete space, whereas previous approaches are limited to continuous spaces. We evaluate our approach on multiple datasets and show that DiGNNExplainer produces explanations that are realistic and faithful to the model's decision-making, outperforming state-of-the-art methods.
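The abstract's key idea is running denoising diffusion directly in a discrete space over categorical node features (e.g., bag-of-words entries), rather than in a continuous embedding space. As a rough illustration of what the forward (noising) side of such a process looks like, here is a minimal sketch of a uniform-transition categorical diffusion step in the style of D3PM-type models. This is not the paper's implementation; the function names, the uniform transition choice, and the noise schedule are all illustrative assumptions.

```python
import numpy as np

def uniform_transition(num_classes, beta):
    # Illustrative uniform-transition matrix for one noising step:
    # Q = (1 - beta) * I + (beta / K) * 11^T, so each row is a
    # valid categorical distribution (rows sum to 1).
    K = num_classes
    return (1.0 - beta) * np.eye(K) + beta * np.ones((K, K)) / K

def forward_noise(x_onehot, betas, rng):
    # Push one-hot feature rows through len(betas) noising steps,
    # then sample a (progressively more uniform) noised category
    # for each row. Hypothetical helper, not the paper's code.
    probs = x_onehot.astype(float)
    for beta in betas:
        probs = probs @ uniform_transition(probs.shape[1], beta)
    return np.array([rng.choice(probs.shape[1], p=p) for p in probs])
```

A learned reverse model would then denoise such samples back toward realistic discrete features; the paper additionally steers this generation with gradients from the HGNN being explained.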
Problem

Research questions and friction points this paper is trying to address.

Explaining opaque predictions of Heterogeneous Graph Neural Networks
Generating realistic node features beyond one-hot encodings
Providing faithful model-level explanations for heterogeneous graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discrete diffusion synthesizes heterogeneous graphs
Generates realistic node features in discrete space
Model-level explanation for Heterogeneous GNN decisions