AI Summary
This work addresses the limited interpretability of complex decision tree models, such as random forests and gradient-boosted trees, in safety-critical applications, where formal justifications for predictions are essential. To overcome this challenge, the authors propose a novel approach based on Answer Set Programming (ASP) that automatically generates diverse logical explanations, including sufficient, contrastive, majority-based, and tree-specific justifications. Compared to SAT-based methods, ASP offers greater flexibility in encoding user preferences and enables the enumeration of all feasible explanations, thereby significantly enhancing both the expressiveness and completeness of the generated justifications. Empirical evaluation across multiple datasets demonstrates the effectiveness of the proposed method in producing varied, formally verifiable explanations, while a systematic analysis highlights its strengths and limitations.
Abstract
Decision tree models, including random forests and gradient-boosted decision trees, are widely used in machine learning due to their high predictive performance. However, their complex structures often make them difficult to interpret, especially in safety-critical applications where model decisions require formal justification. Recent work has demonstrated that logical and abductive explanations can be derived through automated reasoning techniques. In this paper, we propose a method for generating various types of explanations, namely sufficient, contrastive, majority, and tree-specific explanations, using Answer Set Programming (ASP). Compared to SAT-based approaches, our ASP-based method offers greater flexibility in encoding user preferences and supports enumeration of all possible explanations. We empirically evaluate the approach on a diverse set of datasets, demonstrating its effectiveness and analyzing its limitations relative to existing methods.