From Latent to Lucid: Transforming Knowledge Graph Embeddings into Interpretable Structures

📅 2024-06-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Knowledge graph embedding (KGE) models suffer from an interpretability bottleneck due to the black-box nature of their high-dimensional latent representations. To address this, we propose the first post-hoc local explanation framework that requires no retraining: it directly decodes the KGE latent space, leveraging embedding smoothness to extract structured triples from neighborhood subgraphs. Our method generates three human-interpretable explanation types (symbolic rules, concrete instances, and analogies), all grounded in local subgraph structure. Crucially, we systematically formalize embedding smoothness as an explanatory mechanism, integrating symbolic subgraph extraction with triple-level statistical regularities. Extensive evaluation across multiple KGE models and benchmark datasets demonstrates high explanation fidelity, strong locality, and real-time scalability. The framework significantly enhances prediction trustworthiness and model debuggability without compromising performance.
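
To make the decoding step concrete, here is a minimal sketch of the core mechanism as described in the summary: retrieve an entity's nearest neighbours in the embedding space (the smoothness assumption) and keep the triple patterns that their one-hop neighbourhoods share. The toy graph, the random stand-in embeddings, and all function names below are hypothetical; this is not the authors' implementation.

```python
import numpy as np

# Hypothetical toy knowledge graph: (head, relation, tail) triples.
triples = [
    ("paris", "capital_of", "france"),
    ("berlin", "capital_of", "germany"),
    ("paris", "located_in", "europe"),
    ("berlin", "located_in", "europe"),
    ("rome", "capital_of", "italy"),
]

entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
rng = np.random.default_rng(0)
# Random stand-in for pretrained KGE vectors; a real run would load
# the embeddings of a trained model instead.
emb = {e: rng.normal(size=16) for e in entities}

def nearest_neighbors(entity, k=2):
    """Rank other entities by cosine similarity in the latent space."""
    v = emb[entity]
    sims = {
        e: float(np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u)))
        for e, u in emb.items() if e != entity
    }
    return sorted(sims, key=sims.get, reverse=True)[:k]

def neighborhood_patterns(entity):
    """Abstract the entity away, keeping (relation, direction, other)."""
    patterns = set()
    for h, r, t in triples:
        if h == entity:
            patterns.add((r, "out", t))
        elif t == entity:
            patterns.add((r, "in", h))
    return patterns

def shared_regularities(entity, k=2):
    """Triple patterns the entity shares with its latent-space neighbours."""
    common = neighborhood_patterns(entity)
    for nb in nearest_neighbors(entity, k):
        # Compare only (relation, direction); the bound entity differs.
        nb_rel = {(r, d) for r, d, _ in neighborhood_patterns(nb)}
        common = {p for p in common if (p[0], p[1]) in nb_rel}
    return common

print(shared_regularities("paris"))
```

In this sketch the surviving patterns are the statistical regularities the explanation is built from; with a trained model, the neighbour list would reflect what the KGE actually encodes rather than random vectors.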

📝 Abstract
In this paper, we introduce a post-hoc and local explainable AI method tailored for Knowledge Graph Embedding (KGE) models. These models are essential to Knowledge Graph Completion yet criticized for their opaque, black-box nature. Despite their significant success in capturing the semantics of knowledge graphs through high-dimensional latent representations, their inherent complexity poses substantial challenges to explainability. While existing methods such as Kelpie rely on resource-intensive perturbation to explain KGE models, our approach directly decodes the latent representations encoded by KGE models, leveraging embedding smoothness: the principle that similar embeddings reflect similar behaviours within the Knowledge Graph, i.e., entities are embedded similarly because their graph neighbourhoods look similar. By identifying symbolic structures, in the form of triples, within the subgraph neighbourhoods of similarly embedded entities, our method surfaces the statistical regularities on which the models rely and translates these insights into human-understandable symbolic rules and facts. This bridges the gap between the abstract representations of KGE models and their predictive outputs, offering clear, interpretable insights. Key contributions include a novel post-hoc and local explainable AI method for KGE models that provides immediate, faithful explanations without retraining, facilitating real-time application on large-scale knowledge graphs. The method's flexibility enables the generation of rule-based, instance-based, and analogy-based explanations, meeting diverse user needs. Extensive evaluations show the effectiveness of our approach in delivering faithful and well-localized explanations, enhancing the transparency and trustworthiness of KGE models.
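
The smoothness premise itself is checkable. The sketch below (reusing the toy graph and helpers from the earlier snippet; again hypothetical, not the paper's code) scores how well latent-space proximity tracks one-hop neighbourhood overlap. Where the score is high, regularities mined from neighbours should yield faithful local explanations.

```python
import numpy as np

# Reuses nearest_neighbors() and neighborhood_patterns() from the
# sketch above; all names and values remain hypothetical.

def jaccard(a, b):
    """Overlap between two sets of (relation, direction) patterns."""
    return len(a & b) / len(a | b) if a | b else 0.0

def smoothness_score(entity, k=2):
    """Mean one-hop neighbourhood overlap with the k nearest
    latent-space neighbours. A high score means the embedding is
    locally smooth around this entity, so regularities mined from
    its neighbours should be faithful to the model's behaviour."""
    own = {(r, d) for r, d, _ in neighborhood_patterns(entity)}
    overlaps = [
        jaccard(own, {(r, d) for r, d, _ in neighborhood_patterns(nb)})
        for nb in nearest_neighbors(entity, k)
    ]
    return float(np.mean(overlaps)) if overlaps else 0.0
```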
Problem

Research questions and friction points this paper is trying to address.

Transforming opaque Knowledge Graph Embeddings into interpretable structures
Decoding latent representations to generate human-understandable symbolic rules
Providing real-time explainable AI without retraining KGE models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decodes latent KGE representations into symbolic rules
Uses embedding smoothness principle for explanation generation
Provides real-time post-hoc explanations without model retraining (illustrated in the sketch below)
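
As a toy illustration of the three explanation styles named above (rule-based, instance-based, analogy-based), the following formatter renders one mined regularity in each style. It continues the earlier sketch; the output templates are illustrative assumptions, not the paper's exact phrasing.

```python
def render_explanations(entity, neighbor, pattern, predicted):
    """Render one mined regularity in the three explanation styles.

    `pattern` is a (relation, direction, bound_entity) tuple from
    shared_regularities(); `predicted` is the (head, relation, tail)
    triple being explained. All templates here are illustrative.
    """
    rel, _, _ = pattern
    head, pred_rel, tail = predicted
    return {
        # Rule-based: an abstract if-then pattern over variables.
        "rule": f"IF (X, {rel}, Y) THEN (X, {pred_rel}, {tail})",
        # Instance-based: a concrete, similarly embedded entity
        # exhibiting the same pattern.
        "instance": f"{neighbor} also has a '{rel}' edge and "
                    f"satisfies '{pred_rel}'",
        # Analogy-based: a proportional analogy with the neighbour.
        "analogy": f"{head} is to {tail} as {neighbor} is to "
                   f"its own '{pred_rel}' target",
    }

print(render_explanations(
    "paris", "berlin",
    ("located_in", "out", "europe"),
    ("paris", "capital_of", "france"),
))
```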
C. Wehner
Sony AI Barcelona, Cognitive Systems Group, University of Bamberg
Chrysa Iliopoulou
Sony AI Barcelona
Tarek R. Besold
Sony AI
Artificial Intelligence · AI for Science · Trustworthy AI · Cognitive Systems