🤖 AI Summary
Traditional graph neural networks are constrained by the expressive power of the 1-WL test and lack fine-grained interpretability, limiting their applicability in high-stakes scenarios that demand trustworthy AI. To address this, this work proposes SymGraph, a symbolic graph learning framework that abandons continuous message passing in favor of discrete structural hashing and topological role aggregation to construct symbolic graph representations. This approach transcends the 1-WL expressivity barrier without requiring differentiable optimization. SymGraph achieves state-of-the-art performance among self-explainable GNNs across multiple benchmarks, accelerates CPU-based training by 10–100×, and generates semantically precise, interpretable rules with potential for scientific discovery.
📝 Abstract
Graph Neural Networks (GNNs) have become essential in high-stakes domains such as drug discovery, yet their black-box nature remains a significant barrier to trustworthiness. While self-explainable GNNs attempt to bridge this gap, they often rely on standard message-passing backbones that inherit fundamental limitations, including the 1-Weisfeiler-Lehman (1-WL) expressivity barrier and a lack of fine-grained interpretability. To address these challenges, we propose SymGraph, a symbolic framework designed to transcend these constraints. By replacing continuous message passing with discrete structural hashing and topological role-based aggregation, our architecture theoretically surpasses the 1-WL barrier, achieving superior expressiveness without the overhead of differentiable optimization. Extensive empirical evaluations demonstrate that SymGraph achieves state-of-the-art performance, outperforming existing self-explainable GNNs. Notably, SymGraph delivers 10–100× training speedups using CPU execution alone. Furthermore, SymGraph generates rules with finer semantic granularity than existing rule-based methods, offering strong potential for scientific discovery and explainable AI.
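To give a concrete sense of what "discrete structural hashing" means in contrast to continuous message passing, the sketch below shows a generic WL-style neighborhood-hashing loop: each node's discrete label is iteratively re-hashed from its own label together with the sorted multiset of its neighbors' labels. This is only an illustrative baseline under our own assumptions (the function name `structural_hash`, the degree-based initialization, and the use of SHA-256 are ours); SymGraph's actual hashing scheme and its topological role aggregation are not specified here.

```python
import hashlib

def structural_hash(adj, num_rounds=2):
    """WL-style discrete structural hashing (illustrative sketch only).

    adj: dict mapping each node to a list of its neighbors.
    Returns a dict mapping each node to a short hex hash string that
    encodes its local structural role after `num_rounds` refinements.
    """
    # Initialize every node with a discrete label: its degree.
    labels = {v: str(len(nbrs)) for v, nbrs in adj.items()}
    for _ in range(num_rounds):
        new_labels = {}
        for v, nbrs in adj.items():
            # Combine own label with the sorted multiset of neighbor labels,
            # then hash the signature to obtain the next discrete label.
            signature = labels[v] + "|" + ",".join(sorted(labels[u] for u in nbrs))
            new_labels[v] = hashlib.sha256(signature.encode()).hexdigest()[:8]
        labels = new_labels
    return labels

# Toy example: in a 4-cycle every node is structurally equivalent,
# so all four nodes receive the same hash.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(structural_hash(cycle))
```

Because the labels are discrete strings rather than learned vectors, no gradient-based optimization is involved, which is the property the abstract points to when it mentions avoiding differentiable optimization.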