🤖 AI Summary
Standard message-passing neural networks (MPNNs) struggle to effectively capture higher-order topological structures in graphs. To address this, we propose LEAP—a learnable positional encoding framework grounded in the local Euler Characteristic Transform (ℓ-ECT). LEAP is the first method to integrate differentiable Euler characteristic transforms (DECT) with local topological awareness, enabling end-to-end trainable joint geometric-topological representations. By approximating the ℓ-ECT locally and embedding it directly into the message-passing mechanism, LEAP captures multi-scale topological features—such as cycles and cavities—without requiring precomputed graph augmentations or preprocessing. Extensive experiments on real-world graph benchmarks and synthetic topological tasks demonstrate that LEAP significantly enhances GNNs’ ability to recognize topological patterns. It achieves an average performance gain of 5.2% (a 12.7% relative improvement) on graph classification and regression tasks, establishing a novel paradigm for topology-aware graph representation learning.
📝 Abstract
Graph neural networks (GNNs) largely rely on the message-passing paradigm, where nodes iteratively aggregate information from their neighbors. Yet, standard message-passing neural networks (MPNNs) face well-documented theoretical and practical limitations. Graph positional encoding (PE) has emerged as a promising direction to address these limitations. The Euler Characteristic Transform (ECT) is an efficiently computable geometric-topological invariant that characterizes shapes and graphs. In this work, we combine the differentiable approximation of the ECT (DECT) and its local variant (ℓ-ECT) to propose LEAP, a new end-to-end trainable local structural PE for graphs. We evaluate our approach on multiple real-world datasets as well as on a synthetic task designed to test its ability to extract topological features. Our results underline the potential of LEAP-based encodings as a powerful component for graph representation learning pipelines.
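To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of a differentiable ECT for a graph: for each direction and filtration threshold, the Euler characteristic is the number of vertices minus the number of edges below the threshold, and the hard indicator functions are relaxed to sigmoids so the quantity becomes differentiable in the node coordinates. The function name `dect_graph` and the sigmoid sharpness parameter are illustrative assumptions.

```python
import numpy as np

def smooth_step(x, scale=10.0):
    """Sigmoid relaxation of the indicator 1[x >= 0]; `scale` is an assumed sharpness."""
    return 1.0 / (1.0 + np.exp(-scale * x))

def dect_graph(coords, edges, directions, thresholds, scale=10.0):
    """Differentiable Euler Characteristic Transform sketch for a graph.

    coords:     (n, d) node coordinates
    edges:      (m, 2) integer index pairs
    directions: (k, d) direction vectors
    thresholds: (s,)   filtration values
    Returns an (k, s) array: chi(direction, threshold) = #vertices - #edges
    whose filtration height lies below the threshold, counted smoothly.
    """
    heights_v = coords @ directions.T                              # (n, k)
    # An edge enters the filtration when its higher endpoint does.
    heights_e = np.maximum(heights_v[edges[:, 0]], heights_v[edges[:, 1]])  # (m, k)
    # Smooth counts: sigmoid(t - height) approximates 1[height <= t].
    v_count = smooth_step(thresholds[None, :] - heights_v[:, :, None], scale).sum(axis=0)
    e_count = smooth_step(thresholds[None, :] - heights_e[:, :, None], scale).sum(axis=0)
    return v_count - e_count

# Usage: a 3-node path graph (chi = 1) filtered along the x-axis.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
edges = np.array([[0, 1], [1, 2]])
directions = np.array([[1.0, 0.0]])
thresholds = np.array([-5.0, 5.0])
ect = dect_graph(coords, edges, directions, thresholds)
# At t = 5 the whole path is included, so chi is close to 3 - 2 = 1.
```

A local variant in the spirit of the ℓ-ECT would apply the same computation to the subgraph induced by each node's neighborhood, yielding per-node topological descriptors that can serve as positional encodings.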