🤖 AI Summary
This work addresses the lack of a systematic comparison between hyperbolic and Euclidean spaces for multi-hop reasoning tasks. We propose an encoder-decoder framework built on hyperbolic embeddings with learnable curvature. Methodologically, curvature is initialized from the dataset's δ-hyperbolicity and optimized end-to-end; ablation studies and controlled experiments isolate the impact of geometric structure. Our key contributions are (i) treating curvature as a learnable parameter and (ii) introducing a geometry-aware initialization strategy grounded in empirical δ-hyperbolicity. Experiments across multiple multi-hop question answering benchmarks demonstrate that hyperbolic space consistently outperforms Euclidean space on datasets with pronounced hierarchical structure, particularly improving long-range dependency modeling and path reasoning accuracy. These results empirically validate the intrinsic suitability of hyperbolic geometry for representing hierarchical relational structures, such as those found in knowledge graphs.
📝 Abstract
Hyperbolic representations are effective for modeling knowledge graph data, which is widely used to facilitate multi-hop reasoning. However, a rigorous and detailed comparison of hyperbolic and Euclidean spaces for this task is lacking. In this paper, through a simple integration of hyperbolic representations into an encoder-decoder model, we perform a controlled and comprehensive set of experiments to compare the capacity of hyperbolic space with that of Euclidean space for multi-hop reasoning. Our results show that the former consistently outperforms the latter across a diverse set of datasets. In addition, through an ablation study, we show that a learnable curvature initialized with the δ-hyperbolicity of the data yields superior results to random initialization. Furthermore, our findings suggest that hyperbolic representations are significantly more advantageous when the dataset exhibits a more pronounced hierarchical structure.
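The δ-hyperbolicity mentioned in the abstract can be estimated from pairwise distances via the standard four-point condition: for every quadruple of points, sort the three pairwise distance sums and take half the gap between the two largest. Below is a minimal sketch of such an estimator, together with an illustrative mapping from δ to an initial curvature value; the `init_curvature` formula is an assumption for exposition only and does not reproduce the paper's exact initialization.

```python
from itertools import combinations

def four_point_delta(d):
    """Gromov delta-hyperbolicity of a finite metric space.

    d: square matrix (list of lists) of pairwise distances.
    For each quadruple, the three pairwise distance sums are sorted;
    delta is half the gap between the two largest, maximized over
    all quadruples. Trees yield delta = 0 (maximally hyperbolic).
    """
    n = len(d)
    delta = 0.0
    for a, b, c, e in combinations(range(n), 4):
        s = sorted([d[a][b] + d[c][e],
                    d[a][c] + d[b][e],
                    d[a][e] + d[b][c]])
        delta = max(delta, (s[2] - s[1]) / 2.0)
    return delta

def init_curvature(delta, eps=1e-3):
    # Illustrative heuristic only (not the paper's formula):
    # smaller delta (more tree-like data) maps to a larger
    # negative curvature magnitude; eps avoids division by zero.
    return -1.0 / (delta + eps) ** 2
```

For example, a path graph (a tree) has δ = 0 under shortest-path distances, while a 3x3 grid under Manhattan distance has δ > 0, so the grid would receive a smaller curvature magnitude at initialization. In practice, the paper's framework then treats this initial curvature as a learnable parameter optimized end-to-end.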