🤖 AI Summary
To address the longstanding trade-off between interpretability and accuracy in graph neural networks (GNNs) for graph classification, this paper proposes GDLNN: an interpretable graph classification framework that integrates a domain-specific programming language—Graph Description Language (GDL)—with neural networks. Its core innovation is a differentiable GDL layer that automatically generates graph representations combining high expressivity with semantic clarity, enabling native, efficient, and high-fidelity model explanations. By performing programmatic, rule-guided feature extraction, the GDL layer substantially reduces explanation overhead while improving model transparency. Evaluated on multiple standard graph classification benchmarks, GDLNN surpasses state-of-the-art GNNs in classification accuracy while delivering high-quality, low-cost explanations. According to the authors, GDLNN is the first approach to jointly achieve strong predictive performance and strong interpretability in graph representation learning.
📝 Abstract
We present GDLNN, a new graph machine learning architecture for graph classification tasks. GDLNN combines a domain-specific programming language, called GDL, with neural networks. The main strength of GDLNN lies in its GDL layer, which generates expressive and interpretable graph representations. Because the graph representation is interpretable, existing model explanation techniques can be directly applied to explain GDLNN's predictions. Our evaluation shows that the GDL-based representation achieves high accuracy on most graph classification benchmark datasets, outperforming dominant graph learning methods such as GNNs. Applying an existing model explanation technique also yields high-quality explanations of GDLNN's predictions. Furthermore, the overall cost of GDLNN remains low even when the cost of generating explanations is included.
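The abstract does not specify GDL's syntax or the internals of the GDL layer, so the following is only a hedged sketch of the general idea it describes: extracting human-readable, rule-based features from a graph and feeding them to a simple differentiable classifier, so that each learned weight directly explains one interpretable feature. The feature rules and the classifier here are illustrative assumptions, not the paper's actual GDL layer.

```python
import numpy as np

def triangle_count(adj):
    # Count triangles in an undirected graph via trace(A^3) / 6.
    a = np.array(adj, dtype=float)
    return int(np.trace(a @ a @ a) // 6)

def extract_features(adj):
    """Hypothetical rule-based, interpretable features for one graph.
    Each entry has a human-readable meaning, unlike an opaque learned
    GNN embedding."""
    a = np.array(adj, dtype=float)
    degrees = a.sum(axis=1)
    return np.array([
        float(a.shape[0]),             # rule 1: number of nodes
        degrees.sum() / 2.0,           # rule 2: number of edges
        float((degrees >= 2).sum()),   # rule 3: nodes with degree >= 2
        float(triangle_count(adj)),    # rule 4: triangle count
    ])

def predict(features, w, b):
    # A linear (logistic) classifier over interpretable features: the
    # weight on each feature directly explains its contribution to the
    # prediction, which is what makes explanation cheap.
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

# Toy example: a triangle graph (3 nodes, all connected).
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
feats = extract_features(tri)  # -> [3.0, 3.0, 3.0, 1.0]
```

With this structure, explaining a prediction reduces to inspecting the per-feature products `feats * w`, rather than running a separate post-hoc GNN explainer, which mirrors the low explanation cost the abstract claims.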