🤖 AI Summary
Existing knowledge graph benchmark datasets commonly lack complete ontological schema information, limiting their utility for evaluating algorithms that rely on semantic constraints or neuro-symbolic reasoning. To address this gap, this work proposes a workflow that jointly extracts both schema and factual triples from knowledge graphs to construct consistency-aware datasets. By leveraging the OWL ontology language and description logic-based reasoning mechanisms, the approach resolves inconsistencies and infers implicit knowledge. The project delivers the first systematically constructed, high-expressivity dataset that integrates a complete ontological schema with factual assertions, while also enriching existing benchmarks with schema information. All released resources support both logical reasoning services and tensor-based loading in mainstream machine learning frameworks, substantially enhancing the fidelity and comprehensiveness of algorithm evaluation.
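To make the consistency-resolution step concrete, the sketch below shows the simplest kind of conflict a schema-aware workflow must handle: a fact asserting membership in two classes the ontology declares disjoint. The triple format and function name are illustrative assumptions, not the resource's actual API, and a real pipeline would use a description logic reasoner rather than this hand-rolled check.

```python
# Illustrative sketch (not the resource's actual API): flag facts that
# violate class-disjointness axioms extracted alongside the schema.

# Schema: pairs of classes declared disjoint (as with owl:disjointWith).
disjoint = {("Person", "Organization")}

# Facts: rdf:type-style assertions, (entity, class).
facts = [
    ("alice", "Person"),
    ("acme", "Organization"),
    ("bob", "Person"),
    ("bob", "Organization"),  # conflicts with the disjointness axiom
]

def find_inconsistent(facts, disjoint):
    """Return entities asserted to belong to two disjoint classes."""
    classes_of = {}
    for entity, cls in facts:
        classes_of.setdefault(entity, set()).add(cls)
    bad = set()
    for entity, classes in classes_of.items():
        for a, b in disjoint:
            if a in classes and b in classes:
                bad.add(entity)
    return bad

print(find_inconsistent(facts, disjoint))  # → {'bob'}
```

A full OWL reasoner generalizes this idea: it checks every axiom type (disjointness, cardinality, domain/range, and so on) and can also entail implicit facts, which is the other half of the workflow described above.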
📖 Abstract
Datasets for the experimental evaluation of knowledge graph refinement algorithms typically contain only ground facts, retaining very limited schema-level knowledge even when such information is available in the source knowledge graphs. This limits the evaluation of methods that rely on rich ontological constraints, reasoning, or neuro-symbolic techniques, and ultimately prevents assessing their performance on large-scale, real-world knowledge graphs. In this paper, we present \resource{}, the first resource that provides a workflow for extracting datasets including both schema and ground facts, ready for machine learning and reasoning services, along with the resulting curated suite of datasets. The workflow also handles inconsistencies that arise when schema and facts are retained together, and leverages reasoning to entail implicit knowledge. The suite includes newly extracted datasets from KGs with expressive schemas while simultaneously enriching existing datasets with schema information. Each dataset is serialized in OWL, making it ready for reasoning services. Moreover, we provide utilities for loading datasets in tensor representations typical of standard machine learning libraries.
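The tensor-loading utilities can be imagined along the following lines. This is a minimal stdlib-only sketch under the assumption that triples arrive as (head, relation, tail) strings; the function name is hypothetical, not the released utilities' actual API. It emits plain integer index triples that any mainstream framework (e.g. PyTorch or TensorFlow) can wrap directly as a tensor.

```python
# Sketch of converting KG triples into integer index form for ML libraries.
# Function and variable names are illustrative assumptions.

def to_index_triples(triples):
    """Map (head, relation, tail) string triples to integer-id triples.

    Returns the indexed triples plus the entity and relation vocabularies,
    assigning ids in first-seen order.
    """
    ent2id, rel2id = {}, {}
    def eid(e):
        return ent2id.setdefault(e, len(ent2id))
    def rid(r):
        return rel2id.setdefault(r, len(rel2id))
    indexed = [(eid(h), rid(r), eid(t)) for h, r, t in triples]
    return indexed, ent2id, rel2id

triples = [
    ("alice", "worksFor", "acme"),
    ("bob", "worksFor", "acme"),
]
idx, ent2id, rel2id = to_index_triples(triples)
# idx == [(0, 0, 1), (2, 0, 1)]; e.g. torch.tensor(idx) yields a (2, 3) tensor.
```

Keeping the vocabularies alongside the index tensor is what lets predictions be mapped back to entity and relation names after training.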