🤖 AI Summary
Message-passing neural networks (MPNNs) struggle to model long-range dependencies between nodes, and graph Transformers recover this ability only at the cost of computational efficiency and awareness of graph structure; ChebNet, in turn, suffers from training instability under high-order polynomial expansion. Method: This paper re-examines ChebNet and uncovers its intrinsic potential for long-range modeling, proposing Stable-ChebNet, a stable training framework grounded in dynamical systems theory. Specifically, it formulates ChebNet as a non-dissipative, stable differential equation system and introduces a controllable spectral propagation mechanism—requiring no eigendecomposition, positional encoding, or graph rewiring—while leveraging Chebyshev polynomials for implicit high-order neighborhood aggregation. Contribution/Results: The method achieves near-state-of-the-art performance on multiple long-range graph benchmarks, significantly outperforming classical MPNNs and graph Transformers, and it maintains scalability and computational efficiency even under high-order polynomial approximations.
📝 Abstract
ChebNet, one of the earliest spectral GNNs, has largely been overshadowed by Message Passing Neural Networks (MPNNs), which gained popularity for their simplicity and effectiveness in capturing local graph structure. Despite their success, MPNNs are limited in their ability to capture long-range dependencies between nodes. This has led researchers either to adapt MPNNs through graph rewiring or to turn to Graph Transformers (GTs), which compromises the computational efficiency that characterized early spatial message-passing architectures and typically disregards the graph structure. Almost a decade after its original introduction, we revisit ChebNet to shed light on its ability to model distant node interactions. We find that, out of the box, ChebNet is already competitive with classical MPNNs and GTs on long-range benchmarks, while scaling well to high-order polynomials. However, we uncover that this polynomial expansion drives ChebNet into an unstable regime during training. To address this limitation, we cast ChebNet as a stable, non-dissipative dynamical system, which we coin Stable-ChebNet. Stable-ChebNet allows for stable information propagation and has controllable dynamics that require no eigendecompositions, positional encodings, or graph rewiring. Across several benchmarks, Stable-ChebNet achieves near-state-of-the-art performance.
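The mechanism being revisited — implicit high-order neighborhood aggregation via Chebyshev polynomials of the graph Laplacian, with no eigendecomposition — can be sketched with the standard ChebNet recurrence. This is an illustrative NumPy sketch, not the authors' Stable-ChebNet implementation; the function name `chebyshev_propagate`, the dense-matrix setup, and the default spectral bound are assumptions for demonstration only.

```python
import numpy as np

def chebyshev_propagate(A, X, K, lam_max=2.0):
    """Propagate node features X with Chebyshev polynomials of the scaled
    graph Laplacian, via the three-term recurrence
        T_0 = I,  T_1 = L_hat,  T_k = 2 * L_hat @ T_{k-1} - T_{k-2}.
    Only matrix-vector products with L_hat are needed, so no
    eigendecomposition is ever computed."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    mask = deg > 0
    d_inv_sqrt[mask] = deg[mask] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Rescale so the spectrum lies in [-1, 1]; lam_max = 2 is a safe
    # upper bound for the normalized Laplacian's largest eigenvalue.
    L_hat = (2.0 / lam_max) * L - np.eye(n)

    T_prev, T_curr = X, L_hat @ X          # T_0 X and T_1 X
    basis = [T_prev, T_curr]
    for _ in range(2, K + 1):
        T_next = 2.0 * (L_hat @ T_curr) - T_prev
        basis.append(T_next)
        T_prev, T_curr = T_curr, T_next
    # Each T_k X mixes information from up to k-hop neighborhoods;
    # a ChebNet layer would combine these with learned weights.
    return basis
```

Because `T_k` is a degree-k polynomial in the Laplacian, a single layer with order K aggregates K-hop neighborhoods at the cost of K sparse multiplies — the scalability property the abstract highlights; it is the growth of these high-order terms that the paper identifies as the source of training instability.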