🤖 AI Summary
Existing knowledge graph multi-hop reasoning methods employ geometric embeddings but rely on neural networks to learn logical operations (e.g., intersection, projection), undermining geometric interpretability. Method: We propose the first fully geometric reasoning paradigm: entities, relations, and logical operations are uniformly modeled as analytically defined geometric regions (e.g., spherical/hyperbolic caps) and explicit transformations (rotation, scaling, projection) in spherical or hyperbolic space, enabling a purely parametric, neural-free implementation of logical operations. We further introduce a novel transitivity loss grounded in the transitive rule $r(a,b) \land r(b,c) \to r(a,c)$. Contribution/Results: Our approach achieves state-of-the-art performance on standard benchmarks while enabling end-to-end geometric interpretability and visualizable, verifiable reasoning paths, bridging geometric representation and symbolic logic in knowledge graph reasoning.
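To make the "neural-free" idea concrete, here is a minimal hypothetical sketch of logical operations implemented as explicit geometric transformations on cap-shaped regions, in the spirit the summary describes. The two-parameter relation encoding (a rotation angle and a radius scale), the 2-D unit circle, and all function names are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def rotate_2d(v, theta):
    """Rotate a 2-D vector around the origin by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

def project(center, radius, rel_angle, rel_scale):
    """Hypothetical projection operation: apply a relation to a cap
    (center on the unit circle, angular radius) by rotating the center
    and scaling the radius -- no learned neural component."""
    new_center = rotate_2d(center, rel_angle)
    new_center /= np.linalg.norm(new_center)  # keep the center on the circle
    return new_center, radius * rel_scale

def intersect(radius_a, radius_b):
    """Hypothetical intersection of two caps with a shared center:
    the smaller cap contains exactly the common points."""
    return min(radius_a, radius_b)

# Example: relation rotates the cap by 90 degrees and halves its radius.
center, radius = np.array([1.0, 0.0]), 0.3
c2, r2 = project(center, radius, np.pi / 2, 0.5)  # center moves toward [0, 1]
```

Because every operation is a closed-form transformation of region parameters, a multi-hop query can be traced and visualized step by step, which is the interpretability benefit the summary claims.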
📝 Abstract
Geometric embedding methods have been shown to be useful for multi-hop reasoning on knowledge graphs by mapping entities and logical operations to geometric regions and geometric transformations, respectively. Geometric embeddings provide a direct interpretability framework for queries. However, current methods have only leveraged the geometric construction of entities, failing to map logical operations to geometric transformations and instead using neural components to learn these operations. We introduce GeometrE, a geometric embedding method for multi-hop reasoning which does not require learning the logical operations and enables full geometric interpretability. Additionally, unlike previous methods, we introduce a transitive loss function and show that it can preserve the logical rule $\forall a,b,c: r(a,b) \land r(b,c) \to r(a,c)$. Our experiments show that GeometrE outperforms current state-of-the-art methods on standard benchmark datasets.