Formally Explaining Decision Tree Models with Answer Set Programming

๐Ÿ“… 2026-01-07
๐Ÿ›๏ธ Electronic Proceedings in Theoretical Computer Science
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the limited interpretability of complex decision tree modelsโ€”such as random forests and gradient-boosted treesโ€”in safety-critical applications, where formal justifications for predictions are essential. To overcome this challenge, the authors propose a novel approach based on Answer Set Programming (ASP) that automatically generates diverse logical explanations, including sufficient, contrastive, majority-based, and tree-specific justifications. Compared to SAT-based methods, ASP offers greater flexibility in encoding user preferences and enables the enumeration of all feasible explanations, thereby significantly enhancing both the expressiveness and completeness of the generated justifications. Empirical evaluation across multiple datasets demonstrates the effectiveness of the proposed method in producing varied, formally verifiable explanations, while a systematic analysis highlights its strengths and limitations.
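To make the notion of a contrastive explanation concrete, here is a minimal brute-force sketch, not the paper's ASP encoding: a contrastive explanation answers "why this prediction rather than another?" by finding a minimal set of feature changes that flips the outcome. The toy tree and feature names below are hypothetical, invented purely for illustration.

```python
from itertools import product

# Toy decision tree over three boolean features (hypothetical example,
# not taken from the paper).
def predict(x):
    if x["rain"] == 1:
        return "stay" if x["wind"] == 1 else "go"
    return "go" if x["sunny"] == 1 else "stay"

FEATURES = ["rain", "wind", "sunny"]

def contrastive_explanations(instance):
    """Enumerate all subset-minimal sets of features whose change
    flips the model's prediction on `instance`."""
    target = predict(instance)
    flips = []
    for values in product([0, 1], repeat=len(FEATURES)):
        candidate = dict(zip(FEATURES, values))
        if predict(candidate) != target:
            flips.append(frozenset(f for f in FEATURES
                                   if candidate[f] != instance[f]))
    # keep only change sets with no strictly smaller flipping change set
    minimal = {s for s in flips if not any(t < s for t in flips)}
    return sorted(tuple(sorted(s)) for s in minimal)

contrastive_explanations({"rain": 1, "wind": 1, "sunny": 0})  # predicted "stay"
# -> [('rain', 'sunny'), ('wind',)]
```

The brute-force search over all feature assignments is exponential; the point of the paper's ASP-based approach is precisely that a solver can perform this kind of enumeration, with minimality and user preferences expressed declaratively.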

๐Ÿ“ Abstract
Decision tree models, including random forests and gradient-boosted decision trees, are widely used in machine learning due to their high predictive performance. However, their complex structures often make them difficult to interpret, especially in safety-critical applications where model decisions require formal justification. Recent work has demonstrated that logical and abductive explanations can be derived through automated reasoning techniques. In this paper, we propose a method for generating various types of explanations, namely, sufficient, contrastive, majority, and tree-specific explanations, using Answer Set Programming (ASP). Compared to SAT-based approaches, our ASP-based method offers greater flexibility in encoding user preferences and supports enumeration of all possible explanations. We empirically evaluate the approach on a diverse set of datasets and demonstrate its effectiveness and limitations compared to existing methods.
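As an illustration of the sufficient explanations mentioned in the abstract, the sketch below enumerates them by brute force for a toy tree; the paper itself encodes this reasoning in ASP, and the tree and feature names here are hypothetical. A sufficient explanation is a subset-minimal set of an instance's feature values that alone guarantees the prediction, regardless of the remaining features.

```python
from itertools import combinations, product

# Toy decision tree over three boolean features (hypothetical example,
# not taken from the paper).
def predict(x):
    if x["rain"] == 1:
        return "stay" if x["wind"] == 1 else "go"
    return "go" if x["sunny"] == 1 else "stay"

FEATURES = ["rain", "wind", "sunny"]

def is_sufficient(subset, instance):
    """True if fixing `subset` to the instance's values forces the same
    prediction under every completion of the remaining features."""
    target = predict(instance)
    free = [f for f in FEATURES if f not in subset]
    for values in product([0, 1], repeat=len(free)):
        candidate = {f: instance[f] for f in subset}
        candidate.update(dict(zip(free, values)))
        if predict(candidate) != target:
            return False
    return True

def sufficient_explanations(instance):
    """Enumerate all subset-minimal sufficient explanations, smallest first."""
    found = []
    for k in range(len(FEATURES) + 1):
        for subset in combinations(FEATURES, k):
            if any(set(s) <= set(subset) for s in found):
                continue  # not minimal: contains a smaller explanation
            if is_sufficient(subset, instance):
                found.append(subset)
    return found

sufficient_explanations({"rain": 1, "wind": 1, "sunny": 0})
# -> [('rain', 'wind'), ('wind', 'sunny')]
```

Note that the instance admits two distinct minimal explanations; enumerating all of them, rather than returning a single one as many SAT-based pipelines do, is exactly the capability the abstract attributes to the ASP encoding.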
Problem

Research questions and friction points this paper is trying to address.

decision tree
model interpretability
formal explanation
safety-critical applications
explainable AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Answer Set Programming
Explainable AI
Decision Trees
Formal Explanation
Abductive Reasoning
๐Ÿ”Ž Similar Papers
No similar papers found.
Akihiro Takemura
National Institute of Informatics, Tokyo, Japan
Masayuki Otani
Tokyo Institute of Technology, Tokyo, Japan
Katsumi Inoue
National Institute of Informatics
Artificial Intelligence, Answer Set Programming, Abductive Reasoning, Inductive Logic Programming