Simplifying Graph Neural Kernels: from Stacking Layers to Collapsed Structure

📅 2025-07-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Graph Neural Tangent Kernels (GNTKs) bridge kernel methods and graph neural networks (GNNs), but their layer-wise stacking architecture incurs substantial redundant computation, resulting in high time complexity and poor scalability. Method: The paper proposes the Simplified Graph Neural Tangent Kernel (SGTK) and Simplified Graph Neural Kernel (SGNK), which replace multi-layer stacking with a continuous $K$-step neighborhood aggregation in a collapsed architecture, and use Gaussian-process modeling of infinitely wide GNNs to compute activation expectations in closed form, bypassing iterative layer-wise propagation. Contribution/Results: On both node and graph classification tasks, SGTK and SGNK match the accuracy of GNTK while significantly reducing time complexity, markedly improving computational efficiency and scalability and enabling practical use on larger graphs without sacrificing representational power.

📝 Abstract
The Graph Neural Tangent Kernel (GNTK) successfully bridges the gap between kernel methods and Graph Neural Networks (GNNs), addressing key challenges such as the difficulty of training deep networks and the limitations of traditional kernel methods. However, the existing layer-stacking strategy in GNTK introduces redundant computations, significantly increasing computational complexity and limiting scalability for practical applications. To address these issues, this paper proposes the Simplified Graph Neural Tangent Kernel (SGTK), which replaces the traditional multi-layer stacking mechanism with a continuous $K$-step aggregation operation. This novel approach streamlines the iterative kernel computation process, effectively eliminating redundant calculations while preserving the kernel's expressiveness. By reducing the dependency on layer stacking, SGTK achieves both computational simplicity and efficiency. Furthermore, we introduce the Simplified Graph Neural Kernel (SGNK), which models infinitely wide Graph Neural Networks as Gaussian Processes. This allows kernel values to be directly determined from the expected outputs of activation functions in the infinite-width regime, bypassing the need for explicit layer-by-layer computation. SGNK further reduces computational complexity while maintaining the capacity to capture intricate structural patterns in graphs. Extensive experiments on node and graph classification tasks demonstrate that the proposed SGTK and SGNK achieve performance comparable to existing approaches while improving computational efficiency. Implementation details are available at https://anonymous.4open.science/r/SGNK-1CE4/.
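The collapsed design described in the abstract can be illustrated with a minimal sketch: a single $K$-step aggregation over a normalized adjacency replaces layer stacking, and the kernel is then obtained from the closed-form expectation of a ReLU activation under a Gaussian process (the arc-cosine kernel). This is an assumption-based illustration of the general idea, not the authors' exact implementation; all function names here are hypothetical.

```python
import numpy as np

def k_step_aggregation(adj, features, k=2):
    """Aggregate node features over K hops in one collapsed pass,
    replacing layer-wise stacking (illustrative sketch)."""
    # Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_norm = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.linalg.matrix_power(a_norm, k) @ features

def gp_kernel(h):
    """Kernel from the expected ReLU output in the infinite-width limit
    (arc-cosine kernel in closed form), standing in for the paper's
    analytic activation expectations."""
    sigma = h @ h.T                      # linear covariance of aggregated features
    d = np.sqrt(np.diag(sigma))
    cos_t = np.clip(sigma / np.outer(d, d), -1.0, 1.0)
    theta = np.arccos(cos_t)
    # E[relu(u) relu(v)] for (u, v) ~ N(0, Sigma), up to scaling
    return np.outer(d, d) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

# Toy 4-node path graph with 2-dimensional node features
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.arange(1.0, 9.0).reshape(4, 2)
K = gp_kernel(k_step_aggregation(adj, x, k=2))
```

The key point the sketch makes concrete: no per-layer loop appears anywhere; the $K$-hop structure is absorbed into one matrix power, and the nonlinearity into one closed-form expectation.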
Problem

Research questions and friction points this paper is trying to address.

How to eliminate redundant computation in the Graph Neural Tangent Kernel (GNTK)
How to simplify kernel computation via continuous K-step aggregation
How to model infinitely wide GNNs as Gaussian Processes efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces multi-layer stacking with K-step aggregation
Models wide GNNs as Gaussian Processes
Reduces computation while preserving expressiveness
Lin Wang
The Hong Kong Polytechnic University
Shijie Wang
The Hong Kong Polytechnic University
Sirui Huang
Hong Kong Polytechnic University, University of Technology Sydney
large language models · causal inference · recommendation
Qing Li
The Hong Kong Polytechnic University