Explainable Graph Representation Learning via Graph Pattern Analysis

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the interpretability of graph representation learning: specifically, what structural and semantic information is encoded in learned graph representations? To this end, we propose PXGL-GNN—the first method to directly interpret graph embeddings at the representation level. Inspired by graph kernels, PXGL-GNN samples diverse subgraph patterns, jointly learns pattern-level representations with node features, and aggregates them via learnable weights to produce human-interpretable, attribution-aware graph embeddings. Theoretical analysis establishes its robustness and generalization guarantees. Extensive experiments on multiple real-world datasets demonstrate that PXGL-GNN significantly outperforms state-of-the-art baselines in both supervised and unsupervised tasks. Crucially, it provides intuitive, verifiable subgraph-pattern attributions—thereby advancing graph neural networks from opaque “black-box” models toward transparent, interpretable modeling.

📝 Abstract
Explainable artificial intelligence (XAI) is an important area in the AI community, and interpretability is crucial for building robust and trustworthy AI models. While previous work has explored model-level and instance-level explainable graph learning, there has been limited investigation into explainable graph representation learning. In this paper, we focus on representation-level explainable graph learning and ask a fundamental question: What specific information about a graph is captured in graph representations? Our approach is inspired by graph kernels, which evaluate graph similarities by counting substructures within specific graph patterns. Although the pattern counting vector can serve as an explainable representation, it has limitations such as ignoring node features and being high-dimensional. To address these limitations, we introduce a framework (PXGL-GNN) for learning and explaining graph representations through graph pattern analysis. We start by sampling graph substructures of various patterns. Then, we learn the representations of these patterns and combine them using a weighted sum, where the weights indicate the importance of each graph pattern's contribution. We also provide theoretical analyses of our methods, including robustness and generalization. In our experiments, we show how to learn and explain graph representations for real-world data using pattern analysis. Additionally, we compare our method against multiple baselines in both supervised and unsupervised learning tasks to demonstrate its effectiveness.
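The abstract's core mechanism is a weighted sum of per-pattern representations, where the learned weights double as importance scores for each graph pattern. A minimal sketch of that aggregation step is below; the names (`combine_patterns`, `pattern_reprs`) are illustrative assumptions, not taken from the paper's code, and the softmax normalization is one plausible way to make the weights read as relative importances.

```python
import numpy as np

def combine_patterns(pattern_reprs, weights):
    """Combine per-pattern graph representations with softmax-normalized
    weights, so each weight reflects a pattern's relative importance.

    pattern_reprs: (P, d) array, one d-dim representation per sampled pattern
    weights:       (P,) array of learnable scores (illustrative assumption)
    """
    w = np.exp(weights - np.max(weights))
    w = w / w.sum()  # softmax -> importance scores summing to 1
    # Weighted sum over patterns: (P, d) weighted by (P,) -> (d,)
    return (pattern_reprs * w[:, None]).sum(axis=0), w

# Toy example: 3 sampled patterns (e.g. triangles, paths, stars), 4-dim embeddings
reprs = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
weights = np.array([2.0, 0.5, 0.5])  # learnable in the actual model
graph_repr, importance = combine_patterns(reprs, weights)
```

Because the final representation is a transparent weighted sum, inspecting `importance` directly attributes the embedding to specific substructure patterns, which is the representation-level explainability the paper targets.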
Problem

Research questions and friction points this paper is trying to address.

What structural and semantic information do learned graph representations actually encode?
Pattern-counting vectors are explainable but ignore node features and are high-dimensional
Prior explainable graph learning targets the model and instance levels, leaving representation-level explanation largely unexplored
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning graph representations via pattern analysis
Weighted combination of sampled substructure patterns
Theoretical analysis of robustness and generalization
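The innovation bullets start from sampling substructures of various patterns. A minimal sketch of one way such connected substructures could be drawn is shown below; this is an assumption for illustration (a neighbor-expansion sampler), not the paper's actual sampling procedure, and `sample_substructure` is a hypothetical name.

```python
import random

def sample_substructure(adj, start, size, rng):
    """Grow a connected node set from `start` by repeatedly adding a random
    neighbor of the current set, up to `size` nodes. The resulting induced
    subgraph could then be matched against candidate graph patterns."""
    nodes = {start}
    frontier = set(adj[start])
    while len(nodes) < size and frontier:
        nxt = rng.choice(sorted(frontier))  # sorted for reproducibility
        nodes.add(nxt)
        frontier |= set(adj[nxt])
        frontier -= nodes
    return sorted(nodes)

# Toy graph: a triangle (0, 1, 2) with a pendant node 3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
rng = random.Random(0)
sub = sample_substructure(adj, 0, 3, rng)
```

Repeating such draws across start nodes and sizes yields the pool of substructures whose pattern-level representations are then learned and combined.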
Xudong Wang
School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China
Ziheng Sun
CUHKSZ
Chris Ding
School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China
Jicong Fan
The Chinese University of Hong Kong, Shenzhen
Artificial Intelligence · Machine Learning