🤖 AI Summary
Post-hoc explainability in knowledge graph completion (KGC) lacks a formal evaluation framework, hindering method reproducibility and cross-study comparison.
Method: This paper introduces the first multi-objective optimization formulation for post-hoc KGC explanation, proposing a unified, generic explanation model that jointly optimizes explanation effectiveness (e.g., MRR, Hits@k), conciseness, and relevance to user queries. A standardized evaluation protocol is established to enable quantitative, comparable, and reproducible assessment of explanation quality.
Contribution/Results: Extensive experiments demonstrate the framework's broad applicability across diverse KGC models and benchmark datasets. It significantly enhances the theoretical rigor and empirical comparability of post-hoc explanation methods, establishing a new benchmark for explainable KGC. The framework bridges critical gaps between interpretability research and practical deployment, enabling systematic evaluation and advancement of transparent KGC systems.
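One common way to make such a multi-objective formulation concrete is scalarization. The notation below is our illustrative reading, not the paper's own: $E$ is a candidate explanation (a subgraph of the KG $G$), $\mathrm{Eff}(E)$ its effectiveness (e.g., the drop in MRR when $E$ is removed), $|E|$ its size (conciseness), and $\mathrm{Rel}(E, q)$ its relevance to a user query $q$:

$$
\max_{E \subseteq G} \;\; \lambda_1\, \mathrm{Eff}(E) \;-\; \lambda_2\, |E| \;+\; \lambda_3\, \mathrm{Rel}(E, q)
$$

Here the $\lambda_i \geq 0$ weight the competing objectives; a Pareto-front treatment instead of a weighted sum is equally possible under the same objectives.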
📄 Abstract
Post-hoc explainability for Knowledge Graph Completion (KGC) lacks formalization and consistent evaluations, hindering reproducibility and cross-study comparisons. This paper argues for a unified approach to post-hoc explainability in KGC. First, we propose a general framework to characterize post-hoc explanations via multi-objective optimization, balancing their effectiveness and conciseness. This unifies existing post-hoc explainability algorithms in KGC and the explanations they produce. Next, we suggest and empirically support improved evaluation protocols using popular metrics like Mean Reciprocal Rank and Hits@$k$. Finally, we stress the importance of interpretability as the ability of explanations to address queries meaningful to end-users. By unifying methods and refining evaluation standards, this work aims to make research in KGC explainability more reproducible and impactful.
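To make the effectiveness metrics concrete, here is a minimal sketch of MRR and Hits@$k$, and of how an explanation's effectiveness can be read off as the drop in MRR after its removal. The function names, the toy ranks, and the removal-based effectiveness reading are our illustration, not the paper's exact protocol:

```python
def mrr(ranks):
    """Mean Reciprocal Rank: average of 1/rank over test queries."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    """Fraction of test queries whose correct entity ranks in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Toy example: ranks of held-out triples with the full graph vs. after
# removing a candidate explanation subgraph. A large metric drop signals
# that the removed subgraph was an effective explanation.
ranks_full = [1, 2, 1, 4]
ranks_without_expl = [5, 10, 2, 20]
delta_mrr = mrr(ranks_full) - mrr(ranks_without_expl)  # effectiveness
```

A conciseness-aware variant would then penalize `delta_mrr` by the explanation's size, mirroring the effectiveness/conciseness trade-off the abstract describes.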