Most General Explanations of Tree Ensembles

πŸ“… 2025-05-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the problem of generating abductive explanations for tree ensemble models (e.g., XGBoost, LightGBM), introducing the first formal definition of the *most general abductive explanation*: a set of feature intervals that maximally covers the input space while guaranteeing an invariant model prediction for a given instance. The proposed method integrates symbolic propagation, interval constraint modeling, and tree-structure pruning, leveraging an SMT solver augmented with heuristic search to compute semantically optimal, human-interpretable explanations in milliseconds. Compared to prior approaches, the method achieves a 3.2× average improvement in explanation coverage, substantially enhancing both the generality and interpretability of eXplainable AI (XAI). This contribution has been accepted at IJCAI 2025.

πŸ“ Abstract
Explainable Artificial Intelligence (XAI) is critical for attaining trust in the operation of AI systems. A key question about an AI system is "why was this decision made this way?". Formal approaches to XAI use a formal model of the AI system to identify abductive explanations. While abductive explanations may be applicable to a large number of inputs sharing the same concrete values, more general explanations may be preferred for numeric inputs. So-called inflated abductive explanations give intervals for each feature, ensuring that any input whose values fall within these intervals is still guaranteed to yield the same prediction. Inflated explanations cover a larger portion of the input space, and hence are deemed more general explanations. But there can be many (inflated) abductive explanations for an instance. Which is the best? In this paper, we show how to find a most general abductive explanation for an AI decision. This explanation covers as much of the input space as possible, while still being a correct formal explanation of the model's behaviour. Given that we only want to give a human one explanation for a decision, the most general explanation gives us the explanation with the broadest applicability, and hence the one most likely to seem sensible. (The paper has been accepted at the IJCAI 2025 conference.)
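The abstract's notion of an inflated abductive explanation (per-feature intervals within which the prediction cannot change) can be illustrated with a toy sketch. The stump ensemble, thresholds, and intervals below are invented for illustration, and brute-force grid sampling stands in for the paper's formal guarantee; it is not the authors' SMT-based algorithm.

```python
# Toy illustration (not the paper's method): an inflated abductive
# explanation assigns each feature an interval such that every input
# inside the resulting box keeps the model's prediction. We check that
# property for a tiny two-stump "ensemble" by sampling a grid.

def tree1(x1, x2):
    # decision stump: class 1 iff x1 > 0.5
    return 1 if x1 > 0.5 else 0

def tree2(x1, x2):
    # decision stump: class 1 iff x2 > 0.3
    return 1 if x2 > 0.3 else 0

def ensemble(x1, x2):
    # unanimous vote of the two stumps (otherwise class 0)
    return 1 if tree1(x1, x2) + tree2(x1, x2) == 2 else 0

def invariant_over(intervals, target, steps=20):
    """Grid-sample the box of per-feature intervals and check that
    every sampled point receives the prediction `target`."""
    (lo1, hi1), (lo2, hi2) = intervals
    for i in range(steps + 1):
        for j in range(steps + 1):
            x1 = lo1 + (hi1 - lo1) * i / steps
            x2 = lo2 + (hi2 - lo2) * j / steps
            if ensemble(x1, x2) != target:
                return False
    return True

instance = (0.9, 0.8)                # this instance is predicted class 1
target = ensemble(*instance)

narrow   = ((0.8, 1.0), (0.7, 0.9))  # valid, but covers little space
wide     = ((0.6, 1.0), (0.4, 1.0))  # valid and more general
too_wide = ((0.0, 1.0), (0.4, 1.0))  # crosses tree1's threshold: invalid

print(invariant_over(narrow, target))    # True
print(invariant_over(wide, target))      # True
print(invariant_over(too_wide, target))  # False
```

Both `narrow` and `wide` are correct explanation boxes for the instance, but `wide` covers more of the input space; the paper's contribution is finding the box of maximum coverage among all such valid boxes, which the real method computes exactly rather than by sampling.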
Problem

Research questions and friction points this paper is trying to address.

Finding the most general abductive explanation for an AI decision
Ensuring explanations cover as much of the input space as possible while remaining correct
Providing the most broadly applicable explanation for human understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses formal models of the AI system to compute abductive explanations
Inflates explanations into per-feature intervals
Finds the most general explanation, maximizing input-space coverage
Authors

Yacine Izza
CREATE, NUS, Singapore

Alexey Ignatiev
Monash University, Australia

João Marques-Silva
ICREA, University of Lleida, Spain

Peter J. Stuckey
Monash University and OPTIMA ARC Industrial Transformation Training Centre, Australia