🤖 AI Summary
Existing Graph Neural Tangent Kernels (GNTKs) bridge kernel methods and graph neural networks (GNNs), but their layer-wise stacking architecture incurs substantial redundant computation, resulting in high time complexity and poor scalability.
Method: We propose two simplified kernels. The Simplified Graph Neural Tangent Kernel (SGTK) replaces multi-layer stacking with a continuous $K$-step neighborhood aggregation, streamlining the iterative kernel computation and eliminating redundant per-layer propagation. The Simplified Graph Neural Kernel (SGNK) models infinitely wide GNNs as Gaussian processes, so kernel values follow analytically from the expected outputs of the activation functions, with no explicit layer-by-layer computation. Under the infinite-width assumption, both kernels enable efficient high-order neighborhood modeling.
Contribution/Results: Theoretically and empirically, SGTK and SGNK achieve accuracy comparable to GNTK on both node and graph classification tasks while significantly reducing time complexity. They markedly improve computational efficiency and scalability, enabling practical deployment on larger graphs without sacrificing representational power.
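The continuous $K$-step aggregation described above can be sketched in a few lines: instead of stacking transformation layers, the normalized propagation operator is simply applied $K$ times in a row. This is an illustrative sketch, not the paper's implementation; all function names here are hypothetical.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops:
    S = D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def k_step_aggregate(A, X, K):
    """Apply the propagation operator K consecutive times (S^K X),
    with no per-layer feature transforms in between -- the
    'continuous K-step aggregation' idea in the summary."""
    S = normalize_adj(A)
    for _ in range(K):
        X = S @ X
    return X

# Toy example: a 3-node path graph with identity features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.eye(3)
H = k_step_aggregate(A, X, K=2)  # each row mixes 2-hop neighborhoods
```

Because the operator is fixed, $S^K X$ can be computed once per graph, which is the source of the claimed efficiency gain over layer-wise kernel recursion.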
📝 Abstract
The Graph Neural Tangent Kernel (GNTK) successfully bridges the gap between kernel methods and Graph Neural Networks (GNNs), addressing key challenges such as the difficulty of training deep networks and the limitations of traditional kernel methods. However, the existing layer-stacking strategy in GNTK introduces redundant computations, significantly increasing computational complexity and limiting scalability for practical applications. To address these issues, this paper proposes the Simplified Graph Neural Tangent Kernel (SGTK), which replaces the traditional multi-layer stacking mechanism with a continuous $K$-step aggregation operation. This novel approach streamlines the iterative kernel computation process, effectively eliminating redundant calculations while preserving the kernel's expressiveness. By reducing the dependency on layer stacking, SGTK achieves both computational simplicity and efficiency. Furthermore, we introduce the Simplified Graph Neural Kernel (SGNK), which models infinitely wide Graph Neural Networks as Gaussian Processes. This allows kernel values to be directly determined from the expected outputs of activation functions in the infinite-width regime, bypassing the need for explicit layer-by-layer computation. SGNK further reduces computational complexity while maintaining the capacity to capture intricate structural patterns in graphs. Extensive experiments on node and graph classification tasks demonstrate that the proposed SGTK and SGNK achieve performance comparable to existing approaches while improving computational efficiency. Implementation details are available at https://anonymous.4open.science/r/SGNK-1CE4/.
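The abstract's claim that kernel values "can be directly determined from the expected outputs of activation functions in the infinite-width regime" refers to closed-form Gaussian expectations. As a hedged illustration (not the paper's exact formulation), the standard closed form for the expected product of ReLU activations of jointly Gaussian pre-activations is:

```python
import numpy as np

def relu_expectation(k_uu, k_vv, k_uv):
    """Closed-form E[ReLU(u) * ReLU(v)] for
    (u, v) ~ N(0, [[k_uu, k_uv], [k_uv, k_vv]]).
    This is the well-known arc-cosine-kernel identity used in
    infinite-width (NNGP-style) analyses; whether SGNK uses ReLU
    specifically is an assumption here."""
    c = np.clip(k_uv / np.sqrt(k_uu * k_vv), -1.0, 1.0)
    theta = np.arccos(c)
    return np.sqrt(k_uu * k_vv) * (np.sin(theta) + (np.pi - theta) * c) / (2 * np.pi)

# Sanity check: E[ReLU(u)^2] = E[u^2]/2 = 0.5 when u ~ N(0, 1).
val = relu_expectation(1.0, 1.0, 1.0)
```

Evaluating such expectations analytically replaces the layer-by-layer kernel recursion with direct computation, which is how the infinite-width Gaussian-process view reduces cost.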