🤖 AI Summary
This study investigates the effectiveness boundaries of hyperbolic graph neural networks (HGNNs), demonstrating that their advantages emerge only when the learning task aligns with the intrinsic hyperbolic geometry of the input graph. To formalize this insight, the authors propose a “geometry–task alignment” principle, which extends HGNN performance evaluation beyond structural properties of graphs to encompass the consistency between task objectives and geometric assumptions. Through synthetic regression, link prediction, and node classification tasks—combined with embedding distortion analysis and comparative model evaluations—the work systematically validates this principle: HGNNs significantly outperform Euclidean counterparts in aligned tasks such as link prediction, yet lose their advantage in misaligned scenarios. This research provides both theoretical grounding and practical guidance for the informed application of hyperbolic graph neural networks.
📝 Abstract
Many complex networks exhibit hyperbolic structural properties, making hyperbolic space a natural candidate for representing hierarchical and tree-like graphs with low distortion. Based on this observation, Hyperbolic Graph Neural Networks (HGNNs) have been widely adopted as a principled choice for representation learning on tree-like graphs. In this work, we question this paradigm by proposing an additional condition of geometry-task alignment, i.e., whether the metric structure of the target follows that of the input graph. We theoretically and empirically demonstrate the capability of HGNNs to recover low-distortion representations on two synthetic regression problems, and show that their geometric inductive bias becomes helpful when the problem requires preserving metric structure. Additionally, we evaluate HGNNs on the tasks of link prediction and node classification by jointly analyzing predictive performance and embedding distortion, revealing that only link prediction is geometry-aligned. Overall, our findings shift the focus from only asking "Is the graph hyperbolic?" to also asking "Is the task aligned with hyperbolic geometry?", showing that HGNNs consistently outperform Euclidean models under such alignment, while their advantage vanishes otherwise.
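As a rough illustration of the embedding-distortion analysis mentioned in the abstract, the sketch below compares graph distances against embedding distances under both the Euclidean metric and the Poincaré-ball model of hyperbolic space. The function names and the specific distortion measure (mean relative deviation) are illustrative assumptions, not the paper's actual implementation.

```python
import math

def poincare_dist(u, v):
    # Geodesic distance in the Poincaré ball model of hyperbolic space:
    # d(u, v) = arccosh(1 + 2||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2))).
    # Points are assumed to lie strictly inside the unit ball.
    sq_diff = sum((a - b) ** 2 for a, b in zip(u, v))
    nu = sum(a * a for a in u)
    nv = sum(b * b for b in v)
    return math.acosh(1 + 2 * sq_diff / ((1 - nu) * (1 - nv)))

def euclidean_dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def avg_distortion(graph_dists, coords, metric):
    # Mean relative deviation between embedding distances and graph
    # (shortest-path) distances over the given node pairs.
    # graph_dists: {(i, j): d_graph}; coords: node index -> embedding vector.
    errs = [
        abs(metric(coords[i], coords[j]) - d_g) / d_g
        for (i, j), d_g in graph_dists.items()
    ]
    return sum(errs) / len(errs)
```

A geometry-aligned task would then show the Poincaré metric achieving lower `avg_distortion` on tree-like graphs than the Euclidean one, mirroring the link-prediction result reported above.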