AI Summary
This work addresses the problem of generating abductive explanations for tree ensemble models (e.g., XGBoost, LightGBM), introducing the first formal definition of the *most general abductive explanation*: a set of feature intervals that maximally covers the input space while guaranteeing an invariant model prediction for a given instance. The proposed method integrates symbolic propagation, interval constraint modeling, and tree-structure pruning, leveraging an SMT solver augmented with heuristic search to compute semantically optimal, human-interpretable explanations in milliseconds. Compared to prior approaches, the method achieves a 3.2× average improvement in explanation coverage, substantially enhancing both the generality and interpretability of eXplainable AI (XAI). This contribution has been accepted at IJCAI 2025.
Abstract
Explainable Artificial Intelligence (XAI) is critical for establishing trust in the operation of AI systems. A key question about an AI system's decision is "why was this decision made this way?". Formal approaches to XAI use a formal model of the AI system to identify abductive explanations. While an abductive explanation may be applicable to a large number of inputs sharing the same concrete values, more general explanations may be preferred for numeric inputs. So-called inflated abductive explanations give intervals for each feature, ensuring that any input whose values fall within these intervals is guaranteed to yield the same prediction. Inflated explanations cover a larger portion of the input space, and hence are deemed more general. But there can be many (inflated) abductive explanations for an instance. Which is the best? In this paper, we show how to find a most general abductive explanation for an AI decision. This explanation covers as much of the input space as possible while still being a correct formal explanation of the model's behaviour. Given that we only want to give a human one explanation for a decision, the most general explanation has the broadest applicability, and hence is the one most likely to seem sensible. (The paper has been accepted at the IJCAI 2025 conference.)
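To make the notion of an inflated explanation concrete, the following is a minimal sketch (not the paper's algorithm, which uses an SMT solver and heuristic search over full tree ensembles): given a single decision tree and a box of feature intervals, it checks whether every input inside the box reaches leaves with the same prediction as the explained instance. The toy tree, the box encoding, and the function names are illustrative assumptions, not part of the paper.

```python
# Illustrative sketch only: validity check for an "inflated" explanation
# on a single decision tree. Real tree ensembles require checking the
# aggregated prediction across all trees, as in the paper's SMT encoding.

def reachable_predictions(node, box):
    """Collect every leaf prediction reachable by some input inside `box`.

    `node` is either {"leaf": label} or
    {"feat": i, "thr": t, "lo": subtree, "hi": subtree}, meaning
    "go to `lo` if x[i] <= t, else go to `hi`".
    `box` maps a feature index to a closed interval (lower, upper).
    """
    if "leaf" in node:
        return {node["leaf"]}
    lower, upper = box[node["feat"]]
    preds = set()
    if lower <= node["thr"]:   # some point in the box takes the low branch
        preds |= reachable_predictions(node["lo"], box)
    if upper > node["thr"]:    # some point in the box takes the high branch
        preds |= reachable_predictions(node["hi"], box)
    return preds

def is_inflated_explanation(tree, box, prediction):
    """A box is a valid inflated explanation iff every input inside it
    yields the same prediction as the explained instance."""
    return reachable_predictions(tree, box) == {prediction}

# Toy tree: predict 1 iff x0 <= 5 and x1 <= 3, else predict 0.
tree = {"feat": 0, "thr": 5.0,
        "lo": {"feat": 1, "thr": 3.0, "lo": {"leaf": 1}, "hi": {"leaf": 0}},
        "hi": {"leaf": 0}}

# Instance x = (2, 1) is predicted 1; the box below keeps that prediction.
assert is_inflated_explanation(tree, {0: (0.0, 5.0), 1: (0.0, 3.0)}, 1)
# Widening feature 0 past its threshold makes prediction 0 reachable too.
assert not is_inflated_explanation(tree, {0: (0.0, 6.0), 1: (0.0, 3.0)}, 1)
```

Under this view, the paper's *most general* explanation corresponds to a valid box whose intervals cannot be widened further in any direction without the check above failing, chosen to maximize coverage of the input space.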