🤖 AI Summary
This work elucidates the theoretical mechanisms underlying the superior performance of Graph Transformers over conventional Graph Convolutional Networks in node-level prediction tasks, particularly their ability to mitigate oversmoothing. By analyzing the Neural Network Gaussian Process (NNGP) limit under infinite width and infinite attention heads, the authors derive inter-layer kernels for nodes and edges that characterize how node features and graph structure propagate through the attention mechanism. For the first time from a Gaussian process perspective, they formally demonstrate that Graph Transformers structurally preserve community information and maintain discriminative deep node representations. The proposed kernel design, which integrates positional encodings with informative priors, is empirically validated on both synthetic and real-world graph datasets, yielding significant performance gains in deep architectures.
📝 Abstract
Graph transformers are the state of the art for learning from graph-structured data and are empirically known to avoid several pitfalls of message-passing architectures. However, there is limited theoretical analysis of why these models perform well in practice. In this work, we prove that attention-based architectures have structural benefits over graph convolutional networks in the context of node-level prediction tasks. Specifically, we study the neural network Gaussian process (NNGP) limits of graph transformers (GAT, Graphormer, Specformer) with infinite width and infinite heads, and derive the node-level and edge-level kernels across the layers. Our results characterise how node features and graph structure propagate through the graph attention layers. As a specific example, we prove that graph transformers structurally preserve community information and maintain discriminative node representations even in deep layers, thereby preventing oversmoothing. We provide empirical evidence on synthetic and real-world graphs that validates our theoretical insights, for example that integrating informative priors and positional encodings can improve the performance of deep graph transformers.
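The oversmoothing contrast at the heart of the abstract can be illustrated with a minimal NumPy sketch (not the paper's NNGP kernel derivation): on a toy two-community graph, repeated symmetric-normalized GCN smoothing collapses the gap between community means, while a toy feature-similarity attention (a stand-in for a GAT-style layer; the graph, weights, and similarity score are all illustrative assumptions) leaks across communities more slowly and keeps representations more discriminative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-block (SBM-like) graph: nodes 0-9 in community A, 10-19 in B.
n, half = 20, 10
p_in, p_out = 0.8, 0.05
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        same = (i < half) == (j < half)
        if rng.random() < (p_in if same else p_out):
            A[i, j] = A[j, i] = 1.0
A += np.eye(n)  # self-loops, as in the GCN convention

# Symmetric normalization: D^{-1/2} A D^{-1/2}
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))

# Community-informative scalar features (+1 vs -1, plus small noise).
x = np.where(np.arange(n) < half, 1.0, -1.0) + 0.1 * rng.standard_normal(n)

def community_gap(v):
    """Separation between the mean features of the two communities."""
    return abs(v[:half].mean() - v[half:].mean())

# Deep linear GCN propagation: repeated smoothing with A_hat.
h = x.copy()
for _ in range(30):
    h = A_hat @ h
gcn_gap = community_gap(h)

# Attention-style propagation: weights depend on feature similarity,
# so within-community neighbours dominate the aggregation.
g = x.copy()
for _ in range(30):
    scores = -np.abs(g[:, None] - g[None, :])   # similar features -> high score
    scores = np.where(A > 0, scores, -np.inf)   # restrict to graph neighbours
    w = np.exp(scores - scores.max(1, keepdims=True))
    w /= w.sum(1, keepdims=True)                # row-wise softmax over neighbours
    g = w @ g
att_gap = community_gap(g)

print(f"GCN community gap after 30 layers:       {gcn_gap:.4f}")
print(f"Attention community gap after 30 layers: {att_gap:.4f}")
```

This is only a caricature of the mechanism the paper formalizes: in the NNGP limit the same effect shows up in the inter-layer node kernels, where attention keeps the kernel community-informative at depth.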