🤖 AI Summary
This work addresses abductive reasoning over ABoxes in inconsistent or erroneous knowledge bases (KBs): the goal is to compute minimal, semantically plausible extensions of a KB that restore entailment of conclusions the KB fails to derive. Because classical abduction breaks down in the presence of imperfect data, we introduce an ABox abduction framework grounded in KB repair semantics, and propose a dual optimization criterion combining model distance and instance minimality to enhance the utility and interpretability of hypotheses. We formalize abduction under distinct repair semantics—including AR, BR, and IAR—for the description logics DL-Lite and ℰℒ⊥, and systematically analyze the existence and verification problems for abductive solutions. We establish complexity bounds for these problems across the considered semantics, including Σ₂^P-completeness and DP-completeness results. Together, these results provide theoretical foundations and algorithmic support for trustworthy abductive reasoning over inconsistent KBs.
📝 Abstract
Abduction is the task of computing a sufficient extension of a knowledge base (KB) that entails a conclusion not entailed by the original KB. It serves to compute explanations, or hypotheses, for such missing entailments. While this task has been intensively investigated for perfect data and under classical semantics, less is known about abduction when erroneous data results in inconsistent KBs. In this paper we define a suitable notion of abduction under repair semantics, and propose a set of minimality criteria that guide abduction towards 'useful' hypotheses. We provide initial complexity results on deciding the existence of, and verifying, abductive solutions under these criteria, for different repair semantics and for the description logics DL-Lite and ℰℒ⊥.
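To make the core task concrete, here is a toy sketch of abduction in a propositional Horn setting (not the paper's description-logic framework): given a KB of facts and rules, a goal the KB does not entail, and a set of abducible atoms, it enumerates the subset-minimal hypotheses whose addition restores entailment of the goal. All names and the rule encoding are illustrative assumptions.

```python
from itertools import combinations

def entails(facts, rules, goal):
    """Check entailment by forward chaining.
    rules is a list of pairs (body, head), read as: body atoms jointly imply head."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return goal in derived

def abduce(facts, rules, goal, abducibles):
    """Return all subset-minimal hypotheses H ⊆ abducibles
    such that facts ∪ H entails goal."""
    solutions = []
    # Enumerate candidates by increasing size so minimality is easy to enforce.
    for size in range(len(abducibles) + 1):
        for cand in combinations(sorted(abducibles), size):
            hyp = set(cand)
            # Skip non-minimal candidates: they contain a known solution.
            if any(sol <= hyp for sol in solutions):
                continue
            if entails(facts | hyp, rules, goal):
                solutions.append(hyp)
    return solutions

# Example: "flies" is not entailed; abducing "winged" restores it.
facts = {"bird"}
rules = [({"bird", "winged"}, "flies")]
print(abduce(facts, rules, "flies", {"winged", "penguin"}))  # [{'winged'}]
```

The paper's setting replaces classical entailment here with entailment under a repair semantics (AR, BR, IAR), so that hypotheses remain meaningful even when the KB itself is inconsistent; this sketch only illustrates the consistent, classical special case.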