🤖 AI Summary
Graph neural networks (GNNs) exhibit strong predictive performance but suffer from poor interpretability, which limits their deployment in high-stakes, trust-critical applications. To address this, the authors propose the Graph Kolmogorov–Arnold Network (GKAN), presented as the first GNN architecture grounded in the Kolmogorov–Arnold representation theorem. GKAN replaces fixed activations with learnable spline functions placed on edges, enabling *intrinsic interpretability* and removing the reliance on post-hoc explanation methods. By embedding structured nonlinear modeling directly into the message-passing mechanism, GKAN combines high expressive power with mathematical traceability. Experiments on node classification, link prediction, and graph classification show that GKAN consistently outperforms state-of-the-art baselines on five benchmark datasets. Crucially, each prediction decomposes into interpretable edge-wise contributions, improving the model's transparency, auditability, and reliability.
📝 Abstract
Graph neural networks (GNNs) excel at learning from network-like data but often lack interpretability, making their application challenging in domains that require transparent decision-making. We propose the Graph Kolmogorov–Arnold Network (GKAN), a novel GNN model that applies learnable spline-based activation functions on edges to enhance both accuracy and interpretability. Our experiments on five benchmark datasets demonstrate that GKAN outperforms state-of-the-art GNN models in node classification, link prediction, and graph classification tasks. Beyond the improved accuracy, GKAN's design inherently provides clear insights into the model's decision-making process, eliminating the need for post-hoc explainability techniques. This paper discusses the methodology, performance, and interpretability of GKAN, highlighting its potential in domains where interpretability is crucial.
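To make the edge-spline idea concrete, here is a minimal NumPy sketch of one KAN-style message-passing layer. This is not the authors' implementation: it substitutes a piecewise-linear spline for the B-splines typically used in KANs, uses mean aggregation over neighbors, and all names (`linear_spline`, `gkan_layer`) and the coefficient layout are illustrative assumptions. The key property it demonstrates is that each (input feature → output feature) edge carries its own learnable 1-D function, so each edge's contribution to a prediction can be read off directly.

```python
import numpy as np

def linear_spline(x, grid, coeffs):
    """Evaluate a learnable piecewise-linear spline at x (elementwise).
    grid: (G,) sorted knot positions; coeffs: (G,) learnable values at knots."""
    x = np.clip(x, grid[0], grid[-1])
    idx = np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2)
    t = (x - grid[idx]) / (grid[idx + 1] - grid[idx])
    return (1 - t) * coeffs[idx] + t * coeffs[idx + 1]

def gkan_layer(X, A, grid, C):
    """One KAN-style message-passing layer (illustrative sketch).
    X: (N, F_in) node features; A: (N, N) 0/1 adjacency matrix.
    C: (F_out, F_in, G) spline coefficients -- one learnable spline per
       (input feature -> output feature) edge of the layer."""
    N, F_in = X.shape
    F_out = C.shape[0]
    # KAN form: each output is a sum of learned 1-D functions of the inputs,
    # so every edge's contribution H_edge = spline(x_i) is directly inspectable.
    H = np.zeros((N, F_out))
    for o in range(F_out):
        for i in range(F_in):
            H[:, o] += linear_spline(X[:, i], grid, C[o, i])
    # Mean-aggregate the transformed features over graph neighbors.
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return (A @ H) / deg
```

With the coefficients set to the knot positions themselves, each spline reduces to the identity, which makes the layer easy to sanity-check before training the coefficients by gradient descent.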