Abstract
The Euler Characteristic Transform (ECT) is an efficiently computable geometrical-topological invariant that characterizes the global shape of data. In this paper, we introduce the Local Euler Characteristic Transform ($\ell$-ECT), a novel extension of the ECT designed to enhance expressivity and interpretability in graph representation learning. Unlike traditional Graph Neural Networks (GNNs), which may lose critical local details through aggregation, the $\ell$-ECT provides a lossless representation of local neighborhoods. This approach addresses key limitations of GNNs by preserving nuanced local structures while maintaining global interpretability. Moreover, we construct a rotation-invariant metric based on $\ell$-ECTs for the spatial alignment of data spaces. Our method outperforms standard GNNs on a variety of node classification tasks, particularly on graphs with high heterophily.
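To make the construction concrete, the following is a minimal sketch of an ECT restricted to a node's neighborhood: for each direction, vertices are filtered by their projection onto that direction, and the Euler characteristic of each sublevel graph is $\chi = \#V - \#E$. This is an illustrative toy (1-hop neighborhoods, 2-D node coordinates, and the names `ect_curve` / `local_ect` are assumptions), not the paper's implementation.

```python
import numpy as np

def ect_curve(coords, edges, direction, thresholds):
    """Euler characteristic chi = #V - #E of sublevel sets along `direction`.

    coords: (n, d) array of vertex positions
    edges: list of (i, j) vertex-index pairs
    direction: unit vector of shape (d,)
    thresholds: 1-D array of filtration values t
    """
    heights = coords @ direction  # vertex filtration values
    # An edge enters the sublevel set once both endpoints have entered.
    edge_heights = (np.array([max(heights[i], heights[j]) for i, j in edges])
                    if edges else np.empty(0))
    chi = []
    for t in thresholds:
        v = int(np.sum(heights <= t))       # vertices present at level t
        e = int(np.sum(edge_heights <= t))  # edges present at level t
        chi.append(v - e)                   # Euler characteristic of a graph
    return np.array(chi)

def local_ect(coords, edges, center, directions, thresholds):
    """Stack ECT curves of the 1-hop neighborhood of `center` over directions."""
    nbrs = ({center}
            | {j for i, j in edges if i == center}
            | {i for i, j in edges if j == center})
    idx = sorted(nbrs)
    remap = {v: k for k, v in enumerate(idx)}
    sub_edges = [(remap[i], remap[j]) for i, j in edges
                 if i in nbrs and j in nbrs]
    sub_coords = coords[idx] - coords[center]  # center the local neighborhood
    return np.stack([ect_curve(sub_coords, sub_edges, d, thresholds)
                     for d in directions])
```

Stacking the curves over a fixed set of directions yields a per-node descriptor; a rotation-invariant comparison, as hinted at in the abstract, would additionally minimize a distance between such descriptors over rotations of the direction set.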