On The Expressive Power of GNN Derivatives

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Neural Networks (GNNs) suffer from limited expressive power, particularly in distinguishing non-isomorphic graphs. Method: This paper proposes High-Order Derivative GNN (HOD-GNN), the first framework to explicitly leverage higher-order derivatives of message-passing GNNs with respect to node features as structure-aware embeddings, thereby encoding subgraph patterns directly. The authors design a sparsity-aware message-passing algorithm for efficient higher-order derivative computation and integrate the derivative-based embeddings into a two-stage, end-to-end trainable GNN architecture. Contribution/Results: The paper proves that the expressive power of the resulting architecture family aligns with the Weisfeiler–Lehman (WL) hierarchy, and establishes deep connections between HOD-GNN, Subgraph GNNs, and popular structural encoding schemes. Experiments on standard graph learning benchmarks demonstrate strong performance, supporting the effectiveness, generalizability, and scalability of the derivative-enhanced representation paradigm.

📝 Abstract
Despite significant advances in Graph Neural Networks (GNNs), their limited expressivity remains a fundamental challenge. Research on GNN expressivity has produced many expressive architectures, leading to architecture hierarchies with models of increasing expressive power. Separately, derivatives of GNNs with respect to node features have been widely studied in the context of the oversquashing and over-smoothing phenomena, GNN explainability, and more. To date, these derivatives remain unexplored as a means to enhance GNN expressivity. In this paper, we show that these derivatives provide a natural way to enhance the expressivity of GNNs. We introduce High-Order Derivative GNN (HOD-GNN), a novel method that enhances the expressivity of Message Passing Neural Networks (MPNNs) by leveraging high-order node derivatives of the base model. These derivatives generate expressive structure-aware node embeddings processed by a second GNN in an end-to-end trainable architecture. Theoretically, we show that the resulting architecture family's expressive power aligns with the WL hierarchy. We also draw deep connections between HOD-GNN, Subgraph GNNs, and popular structural encoding schemes. For computational efficiency, we develop a message-passing algorithm for computing high-order derivatives of MPNNs that exploits graph sparsity and parallelism. Evaluations on popular graph learning benchmarks demonstrate HOD-GNN's strong performance.
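The core intuition, that derivatives of an MPNN with respect to node features carry structural information, can be illustrated with a minimal toy sketch (this is an illustrative assumption-laden example, not the paper's algorithm): for a linear sum-aggregation MPNN with self-loops, the Jacobian of the node outputs with respect to the node inputs counts walks of bounded length between nodes, exactly the kind of structure-aware signal that derivative-based embeddings can expose.

```python
# Toy illustration (not HOD-GNN itself): with a linear layer
# h'_i = x_i + sum_{j in N(i)} x_j, the Jacobian of the 2-layer
# output w.r.t. the input equals (I + A)^2, whose entries count
# walks of length <= 2 between nodes -- a structure-aware signal.

def mpnn_layer(x, adj):
    """One sum-aggregation message-passing step with a self-loop."""
    return [x[i] + sum(x[j] for j in adj[i]) for i in range(len(x))]

def model(x, adj, layers=2):
    """Stack of linear message-passing layers."""
    for _ in range(layers):
        x = mpnn_layer(x, adj)
    return x

def jacobian(adj, n, eps=1.0):
    """Finite-difference Jacobian; exact here because the model is linear."""
    base = model([0.0] * n, adj)
    J = []
    for j in range(n):
        x = [0.0] * n
        x[j] = eps  # perturb node j's input feature
        out = model(x, adj)
        J.append([(out[i] - base[i]) / eps for i in range(n)])
    return J  # J[j][i] = sensitivity of node i's output to node j's input

# 4-cycle graph: 0-1-2-3-0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
J = jacobian(adj, 4)
print(J[0])  # [3.0, 2.0, 2.0, 2.0]: e.g. two length-2 walks from 0 to 2
```

Higher-order derivatives of nonlinear MPNNs generalize this, which is why they can separate graphs that the base message-passing model cannot.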
Problem

Research questions and friction points this paper is trying to address.

Enhancing GNN expressivity using high-order node derivatives
Overcoming limited expressivity in Message Passing Neural Networks
Developing efficient derivative computation via message-passing algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging high-order derivatives to enhance GNN expressivity
End-to-end trainable architecture with structure-aware embeddings
Efficient message-passing algorithm exploiting graph sparsity and parallelism