🤖 AI Summary
Standard graph neural networks (GNNs) exhibit limited performance on low-homophily graphs, yet homophily alone is insufficient to characterize the utility of graph structure for label prediction. This work proposes a unified theoretical framework that integrates Forman curvature–guided graph rewiring and Laplacian positional encoding through the lens of label informativeness (LI), yielding a geometry-aware GNN architecture named ASEHybrid. The study establishes, for the first time, necessary and sufficient conditions linking LI to GNN performance gains, revealing that curvature-based rewiring does not enhance expressivity but instead reshapes information flow. Convergence and Lipschitz stability guarantees are provided for the curvature-guided rewiring process. Experiments demonstrate that ASEHybrid significantly outperforms feature-only baselines on heterophilic yet label-informative datasets such as Chameleon and Squirrel, while yielding only marginal improvements in high-baseline settings, thereby validating both the theoretical predictions and the efficacy of the proposed approach.
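To make the curvature side of the summary concrete, here is a minimal sketch of degree-based Forman curvature on an unweighted, undirected graph. It uses the standard combinatorial form F(u, v) = 4 − deg(u) − deg(v), plus the common triangle-augmented term 3 · #triangles; this is an illustrative computation only, not the paper's exact rewiring procedure, and all function names are ours.

```python
from collections import defaultdict

def forman_curvature(edges):
    """Return {edge: curvature} using the triangle-augmented degree-based formula."""
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    curv = {}
    for u, v in edges:
        # Triangles through (u, v) are common neighbors of u and v.
        triangles = len(neighbors[u] & neighbors[v])
        curv[(u, v)] = 4 - len(neighbors[u]) - len(neighbors[v]) + 3 * triangles
    return curv

# Toy graph: a triangle (0, 1, 2) plus a pendant edge (2, 3) acting as a bottleneck.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
curv = forman_curvature(edges)
# The bridge (2, 3) gets the lowest curvature (0), the intra-triangle
# edges the highest (2 or 3) -- low-curvature edges flag bottlenecks.
```

Curvature-guided rewiring schemes typically use such scores to decide where to add or drop edges (e.g. around the most negatively curved bottlenecks), which reshapes how information flows without changing what the message-passing scheme can distinguish.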
📝 Abstract
Standard message-passing graph neural networks (GNNs) often struggle on graphs with low homophily, yet homophily alone does not explain this behavior: graphs with similar homophily levels can exhibit markedly different performance, and some heterophilous graphs remain easy for vanilla GCNs. Recent work suggests that label informativeness (LI), the mutual information between the labels of adjacent nodes, provides a more faithful characterization of when graph structure is useful. In this work, we develop a unified theoretical framework that connects curvature-guided rewiring and positional geometry through the lens of label informativeness, and instantiate it in a practical geometry-aware architecture, ASEHybrid. Our analysis provides a necessary-and-sufficient characterization of when geometry-aware GNNs can improve over feature-only baselines: such gains are possible if and only if graph structure carries label-relevant information beyond node features. Theoretically, we relate adjusted homophily and label informativeness to the spectral behavior of label signals under Laplacian smoothing, show that degree-based Forman curvature does not increase expressivity beyond the one-dimensional Weisfeiler–Lehman test but instead reshapes information flow, and establish convergence and Lipschitz stability guarantees for a curvature-guided rewiring process. Empirically, we realize ASEHybrid with Forman curvature and Laplacian positional encodings and conduct controlled ablations on Chameleon, Squirrel, Texas, Tolokers, and Minesweeper, observing gains precisely on label-informative heterophilous benchmarks, where structure carries label-relevant information beyond node features, and no meaningful improvement in high-baseline regimes.
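Since the abstract's central quantity is label informativeness, the following sketch shows one standard way to estimate it: the mutual information between endpoint labels of a uniformly random edge, normalized by the entropy of the (degree-weighted) endpoint label marginal. This is an illustrative estimator under those assumptions, not the paper's code; the function name is ours.

```python
import math
from collections import Counter

def label_informativeness(edges, labels):
    """Estimate LI = I(y_u; y_v) / H(y) over a uniformly random edge.

    Sampling both orientations of each undirected edge makes the endpoint
    marginal the degree-weighted class distribution, matching the
    edge-sampling view of LI.
    """
    directed = edges + [(v, u) for u, v in edges]
    joint = Counter((labels[u], labels[v]) for u, v in directed)
    marginal = Counter(labels[u] for u, _ in directed)
    n = len(directed)
    h_marg = -sum((c / n) * math.log(c / n) for c in marginal.values())
    h_joint = -sum((c / n) * math.log(c / n) for c in joint.values())
    # I(y_u; y_v) = H(y_u) + H(y_v) - H(y_u, y_v); both endpoints share a marginal.
    mi = 2 * h_marg - h_joint
    return mi / h_marg if h_marg > 0 else 0.0

# Fully heterophilous yet maximally informative: a bipartite 4-cycle where
# every edge joins opposite classes, so a neighbor's label determines yours.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
labels = {0: "a", 1: "b", 2: "a", 3: "b"}
print(label_informativeness(edges, labels))  # 1.0: zero edge homophily, maximal LI
```

The toy example illustrates the abstract's point directly: edge homophily is zero, yet LI is maximal, so a GNN can exploit the structure despite heterophily, which is exactly the regime (e.g. Chameleon, Squirrel) where ASEHybrid is reported to help.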