Positional Encoding meets Persistent Homology on Graphs

📅 2025-06-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Message-passing GNNs suffer from inherent local inductive biases, limiting their ability to capture global graph structural properties (e.g., connectivity, cycles). While positional encodings (PE) and persistent homology (PH) show promise for encoding global structure, their expressive power boundaries remain uncharacterized rigorously. This work first proves that PE and PH are incomparable—neither dominates the other—and constructs explicit counterexamples where each fails while the other succeeds. We propose PiPE, a learnable, topology-aware positional encoding framework, and theoretically establish that PiPE strictly subsumes both PE and PH in expressive power. PiPE integrates multi-scale structural modeling, efficient PH computation, and differentiable topological feature embedding. Extensive experiments demonstrate significant improvements over state-of-the-art methods on molecular property prediction, graph classification, and out-of-distribution generalization. Our implementation is publicly available.

Technology Category

Application Category

📝 Abstract
The local inductive bias of message-passing graph neural networks (GNNs) hampers their ability to exploit key structural information (e.g., connectivity and cycles). Positional encoding (PE) and Persistent Homology (PH) have emerged as two promising approaches to mitigate this issue. PE schemes endow GNNs with location-aware features, while PH methods enhance GNNs with multiresolution topological features. However, a rigorous theoretical characterization of the relative merits and shortcomings of PE and PH has remained elusive. We bridge this gap by establishing that neither paradigm is more expressive than the other, providing novel constructions where one approach fails but the other succeeds. Our insights inform the design of a novel learnable method, PiPE (Persistence-informed Positional Encoding), which is provably more expressive than both PH and PE. PiPE demonstrates strong performance across a variety of tasks (e.g., molecule property prediction, graph classification, and out-of-distribution generalization), thereby advancing the frontiers of graph representation learning. Code is available at https://github.com/Aalto-QuML/PIPE.
Problem

Research questions and friction points this paper is trying to address.

Enhancing GNNs with structural information via PE and PH
Theoretical comparison of PE and PH expressiveness in GNNs
Designing PiPE to outperform PE and PH in graph tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Positional Encoding with Persistent Homology
Introduces PiPE for enhanced expressiveness
Improves graph representation learning performance
🔎 Similar Papers
No similar papers found.