GRainsaCK: a Comprehensive Software Library for Benchmarking Explanations of Link Prediction Tasks on Knowledge Graphs

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standardized evaluation of explanation methods for knowledge graph link prediction remains challenging due to the absence of unified benchmarks. This paper introduces a standardized evaluation framework designed specifically for link prediction explanations, accompanied by GRainsaCK, an end-to-end open-source library that automates model training, explanation generation, and multidimensional evaluation. Through a modular, function-based design, GRainsaCK decouples embedding models from explanation algorithms, enabling plug-and-play substitution of either component. It provides evaluation metrics covering diversity, faithfulness, and readability, along with curated benchmark datasets. This work fills a critical gap in the field's evaluation infrastructure, improving reproducibility, comparability, and tool reuse. The library is publicly released with full documentation and a tutorial.

📝 Abstract
Since Knowledge Graphs are often incomplete, link prediction methods are adopted to predict missing facts. Scalable embedding-based solutions are mostly adopted for this purpose; however, they lack comprehensibility, which may be crucial in several domains. Explanation methods tackle this issue by identifying the supporting knowledge that explains the predicted facts. Regrettably, quantitatively evaluating and comparing the resulting explanations is challenging, as there is no standard evaluation protocol or overall benchmarking resource. We fill this important gap by proposing GRainsaCK, a reusable software resource that fully streamlines all the tasks involved in benchmarking explanations, from model training to evaluation of explanations under the same protocol. Moreover, GRainsaCK furthers modularity and extensibility by implementing its main components as functions that can be easily replaced. Finally, to foster its reuse, we provide extensive documentation, including a tutorial.
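The abstract describes a pipeline whose main components are ordinary functions, so any stage can be swapped without touching the others. A minimal sketch of that design, assuming nothing about GRainsaCK's actual API (all names below are illustrative stand-ins, not the library's real functions):

```python
# Illustrative sketch of a function-based benchmarking pipeline in the style
# the paper describes. Every stage (trainer, explainer, metric) is a plain
# function passed as a parameter, so each can be replaced independently.
# None of these names come from GRainsaCK itself.

def train_model(triples):
    """Stand-in for embedding-model training: index tails reachable per head."""
    model = {}
    for head, _, tail in triples:
        model.setdefault(head, set()).add(tail)
    return model

def explain_prediction(model, head, tail):
    """Stand-in explainer: return triples that 'support' the predicted link."""
    return [(head, "linked_to", t) for t in model.get(head, set()) if t == tail]

def evaluate_explanation(explanation):
    """Stand-in metric: a trivial score (1.0 if any support was found)."""
    return 1.0 if explanation else 0.0

def run_benchmark(triples, query, trainer=train_model,
                  explainer=explain_prediction, metric=evaluate_explanation):
    """End-to-end run; pass a different function to swap any stage."""
    model = trainer(triples)
    explanation = explainer(model, *query)
    return metric(explanation)

triples = [("alice", "knows", "bob"), ("bob", "knows", "carol")]
score = run_benchmark(triples, ("alice", "bob"))
```

Because every stage is a keyword argument, substituting a new explanation method or evaluation metric means passing one different function, which is the kind of plug-and-play extensibility the abstract claims.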
Problem

Research questions and friction points this paper is trying to address.

Lack of standardized evaluation for knowledge graph explanation methods
No comprehensive benchmarking resource for explanation comparisons
Challenges in quantitatively assessing link prediction explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Software library (GRainsaCK) for benchmarking link prediction explanations
Streamlines the full pipeline, from model training to explanation evaluation, under a single protocol
Implements main components as functions that are easy to replace or extend