Can You Tell the Difference? Contrastive Explanations for ABox Entailments

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the problem of contrastive explanation in ABox reasoning for description logic knowledge bases—specifically, explaining why an individual *a* is an instance of concept *C* while *b* is not. We propose the first unified formal framework that simultaneously handles the positive entailment of *C*(*a*) and the non-entailment of *C*(*b*). Our method considers both lightweight and more expressive description logics, defines multi-criteria optimal contrastive explanations, analyzes their computational complexity, and devises a scalable, implemented generation algorithm. Experiments on real-world knowledge bases demonstrate the approach's effectiveness and practical feasibility. Key contributions include: (i) the first formalization of contrastive ABox explanation; (ii) a unified explanatory mechanism covering both positive and negative instances; (iii) a principled system of optimization criteria grounded in logical minimality, relevance, and comprehensibility; and (iv) a theoretically sound and empirically validated solution that bridges formal semantics with scalable implementation.

📝 Abstract
We introduce the notion of contrastive ABox explanations to answer questions of the type "Why is a an instance of C, but b is not?". While there are various approaches for explaining positive entailments (why is C(a) entailed by the knowledge base) as well as missing entailments (why is C(b) not entailed) in isolation, contrastive explanations consider both at the same time, which allows them to focus on the relevant commonalities and differences between a and b. We develop an appropriate notion of contrastive explanations for the special case of ABox reasoning with description logic ontologies, and analyze the computational complexity for different variants under different optimality criteria, considering lightweight as well as more expressive description logics. We implemented a first method for computing one variant of contrastive explanations, and evaluated it on generated problems for realistic knowledge bases.
Problem

Research questions and friction points this paper is trying to address.

Explaining why one individual belongs to a class while another does not
Developing contrastive ABox explanations for description logic ontologies
Analyzing computational complexity and implementing explanation methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive ABox explanations compare instance entailments and non-entailments
Focuses on relevant commonalities and differences between instances
Computes explanations using description logic ontologies with complexity analysis
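To make the contrast between an entailment and a non-entailment concrete, the following is a minimal toy sketch in plain Python. It checks membership in an existential concept of the form ∃r.A over a bare ABox: the positive side collects the assertions that witness *a* : ∃r.A, and the negative side collects the assertions whose absence blocks *b* : ∃r.A. All individual, role, and concept names below are illustrative assumptions; the paper's actual framework covers full description logic ontologies (including TBox reasoning and optimality criteria), which this sketch does not.

```python
# Toy ABox: (subject, role, object) triples and (individual, concept) assertions.
# Names (a, b, hasParent, Doctor, ...) are made up for illustration.
ROLE_ASSERTIONS = {("a", "hasParent", "p"), ("b", "hasParent", "q")}
CONCEPT_ASSERTIONS = {("p", "Doctor")}  # note: q is NOT asserted to be a Doctor


def supports(x, role, concept):
    """ABox assertion pairs witnessing x : ∃role.concept (the positive side)."""
    return [((s, r, y), (y, concept))
            for (s, r, y) in ROLE_ASSERTIONS
            if s == x and r == role and (y, concept) in CONCEPT_ASSERTIONS]


def missing(x, role, concept):
    """Concept assertions that, if added, would make x : ∃role.concept hold
    (the negative side: what b lacks relative to its role successors)."""
    return [(y, concept)
            for (s, r, y) in ROLE_ASSERTIONS
            if s == x and r == role and (y, concept) not in CONCEPT_ASSERTIONS]


# Contrast "Why is a an instance of ∃hasParent.Doctor, but b is not?":
pos = supports("a", "hasParent", "Doctor")  # witnesses for a
neg = missing("b", "hasParent", "Doctor")   # what is absent for b
```

Here the contrastive explanation pairs the witnessing assertions for *a* (its parent *p* is a Doctor) with the corresponding gap for *b* (its parent *q* carries no Doctor assertion), which is exactly the kind of aligned commonality-and-difference the paper's notion is designed to surface.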
Patrick Koopmann (Vrije Universiteit Amsterdam)
Yasir Mahmood (Data Science Group, Heinz Nixdorf Institute, Paderborn University, Germany)
Axel-Cyrille Ngonga Ngomo (Professor of Data Science at Paderborn University, Heinz Nixdorf Institute; Knowledge Graphs, Knowledge Engineering, Semantic Web, Machine Learning)
Balram Tiwari (Data Science Group, Heinz Nixdorf Institute, Paderborn University, Germany)