Learning Laplacian Positional Encodings for Heterophilous Graphs

📅 2025-04-29
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Existing graph positional encodings (PEs) frequently fail, or even degrade GNN performance, on heterophilous graphs (where neighboring nodes tend to have different labels), despite the prevalence of heterophily in real-world networks. To address this, the authors propose Learnable Laplacian Positional Encodings (LLPE), a PE framework theoretically and empirically tailored to heterophilous graphs. LLPE leverages the full spectrum of the graph Laplacian and introduces learnable frequency-domain filters to jointly model both homophilous and heterophilous structural patterns. Crucially, it can provably approximate a general class of graph distances, breaking the reliance of conventional PEs on homophily assumptions. LLPE integrates seamlessly into both GNNs and graph transformers. Evaluated on 12 benchmark datasets, it yields substantial improvements: up to 35% accuracy gain on synthetic graphs and up to 14% on real-world graphs.
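The mechanism described above, a learnable filter applied over the full Laplacian spectrum, can be illustrated with a minimal sketch. The function name, the polynomial form of the filter, and its coefficients `theta` are all hypothetical stand-ins (in practice the filter would be parameterized, e.g. by an MLP, and trained end-to-end); this is not the paper's implementation.

```python
import numpy as np

def spectral_pe(adj, theta):
    """Illustrative sketch of a spectrum-based positional encoding.

    Eigendecomposes the symmetric normalized Laplacian and scales each
    eigenvector (frequency component) by a parameterized filter
    phi(lambda) = sum_k theta[k] * lambda**k. Using the FULL spectrum,
    including high frequencies, is what lets the encoding represent
    heterophilous as well as homophilous structure. `theta` is a
    hypothetical learnable parameter vector.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    # D^{-1/2}, with isolated nodes mapped to 0 instead of inf
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(np.maximum(deg, 1e-12)), 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    evals, evecs = np.linalg.eigh(lap)          # full spectrum in [0, 2]
    gains = np.polyval(theta[::-1], evals)      # filter response per frequency
    return evecs * gains[None, :]               # row i = encoding of node i

# Usage on a 3-node path graph; with theta = [1.0] the filter is
# constant, so the encoding is just the orthonormal eigenvector basis.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
pe = spectral_pe(adj, np.array([1.0]))
print(pe.shape)  # (3, 3)
```

A node's encoding here concatenates its entries across all eigenvectors; feeding these rows (or a truncated/projected version) as extra node features is one common way PEs are attached to GNN or transformer inputs.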

📝 Abstract
In this work, we theoretically demonstrate that current graph positional encodings (PEs) are not beneficial and could potentially hurt performance in tasks involving heterophilous graphs, where nodes that are close tend to have different labels. This limitation is critical as many real-world networks exhibit heterophily, and even highly homophilous graphs can contain local regions of strong heterophily. To address this limitation, we propose Learnable Laplacian Positional Encodings (LLPE), a new PE that leverages the full spectrum of the graph Laplacian, enabling it to capture graph structure on both homophilous and heterophilous graphs. Theoretically, we prove LLPE's ability to approximate a general class of graph distances and demonstrate its generalization properties. Empirically, our evaluation on 12 benchmarks demonstrates that LLPE improves accuracy across a variety of GNNs, including graph transformers, by up to 35% and 14% on synthetic and real-world graphs, respectively. Going forward, our work represents a significant step towards developing PEs that effectively capture complex structures in heterophilous graphs.
Problem

Research questions and friction points this paper is trying to address.

Current positional encodings provide no benefit, and can hurt performance, on heterophilous graphs
Heterophily is common in real-world networks, and even homophilous graphs contain locally heterophilous regions
Conventional PEs rest on homophily assumptions that limit the structures they can capture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable Laplacian Positional Encodings (LLPE) with learnable frequency-domain filters
Utilizes the full spectrum of the graph Laplacian, with provable graph-distance approximation
Improves accuracy on both heterophilous and homophilous graphs, across GNNs and graph transformers