🤖 AI Summary
This study addresses a critical reliability issue for machine learning on tabular data: the stability of global model interpretations. Method: We systematically evaluate the consistency of mainstream explanation methods under small random perturbations to both the input data and the underlying algorithms, across supervised and unsupervised learning tasks. Contribution/Results: Our large-scale empirical analysis, the first of its kind, reveals that prevailing explanation methods are generally unstable; that stability is uncorrelated with predictive accuracy; that explanations are often more fragile than the model predictions themselves; and that no single method consistently dominates across benchmark datasets. We propose a reliability-assessment paradigm that prioritizes stability, underpinned by a quantitative perturbation-analysis framework. To enable reproducible, standardized stability evaluation, we open-source the IML Dashboard and a companion Python toolkit. These contributions move explainable AI beyond the mere “availability” of interpretations toward genuine “trustworthiness.”
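To make the perturbation-analysis idea concrete, here is a minimal sketch of one axis of such an evaluation: perturb the algorithm by varying its random seed, then score how consistently the refitted models rank features. This is an illustration, not the authors' released toolkit; the dataset, model, and repeat count are assumptions.

```python
# Illustrative sketch (not the released IML toolkit): algorithmic perturbation.
# Refit the same model on the same data under different random seeds and
# measure how consistently the fitted models rank features.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)  # assumed benchmark dataset

# One feature-importance vector per seed (the "interpretation" per refit).
importances = [
    RandomForestClassifier(n_estimators=100, random_state=seed)
    .fit(X, y)
    .feature_importances_
    for seed in range(20)
]

# Stability score: mean pairwise Spearman rank correlation between the
# importance vectors (1.0 means the feature ranking never changes).
pairs = [
    spearmanr(importances[i], importances[j])[0]
    for i in range(len(importances))
    for j in range(i + 1, len(importances))
]
print(f"mean rank-correlation stability: {np.mean(pairs):.3f}")
```

A score near 1 means the ranking barely moves under reseeding; the study's central finding is that, for many popular methods, such scores fall well short of that. A data-perturbation variant of the same loop appears after the abstract below.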
📝 Abstract
As machine learning systems are increasingly used in high-stakes domains, growing emphasis has been placed on making them interpretable in order to build trust in these systems. In response, a range of interpretable machine learning (IML) methods have been developed to generate human-understandable insights into otherwise black-box models. With these methods, a fundamental question arises: Are these interpretations reliable? Unlike prediction accuracy or other evaluation metrics for supervised models, proximity to the true interpretation is difficult to define. Instead, we ask a closely related question that we argue is a prerequisite for reliability: Are these interpretations stable? We define an interpretation as stable if its findings are consistent under small random perturbations to the data or to the algorithms. In this study, we conduct the first systematic, large-scale empirical study of the stability of popular global interpretation methods for machine learning, covering both supervised and unsupervised tasks on tabular data. Our findings reveal that popular interpretation methods are frequently unstable, notably less stable than the predictions themselves, and that there is no association between the accuracy of machine learning predictions and the stability of their associated interpretations. Moreover, we show that no single method consistently provides the most stable interpretations across a range of benchmark datasets. Overall, these results suggest that interpretability alone does not warrant trust, and they underscore the need for rigorous evaluation of interpretation stability in future work. To support these principles, we have developed and released an open-source IML dashboard and Python package that enable researchers to assess the stability and reliability of their own data-driven interpretations and discoveries.
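To illustrate the finding that interpretations can be more fragile than predictions, here is a second hedged sketch (again illustrative, not the released package) that applies the same data perturbation, a bootstrap resample, to both the model's test-set predictions and its top-K important features, so the two kinds of stability can be compared on equal footing.

```python
# Illustrative sketch (not the released IML package): data perturbation.
# Under identical bootstrap resamples, compare how stable the predictions
# are versus how stable the top-K feature interpretation is.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # assumed benchmark dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

K = 5  # compare the top-K features across refits (assumed cutoff)
preds, topk = [], []
for _ in range(20):
    idx = rng.integers(0, len(X_tr), size=len(X_tr))  # bootstrap resample
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr[idx], y_tr[idx])
    preds.append(model.predict(X_te))
    topk.append(set(np.argsort(model.feature_importances_)[-K:]))

def pairwise(vals, agree):
    """Mean pairwise agreement across all distinct pairs of refits."""
    return np.mean([agree(vals[i], vals[j])
                    for i in range(len(vals))
                    for j in range(i + 1, len(vals))])

# Prediction stability: fraction of test points labeled identically by two refits.
pred_stab = pairwise(preds, lambda a, b: np.mean(a == b))
# Interpretation stability: Jaccard overlap of the top-K feature sets.
interp_stab = pairwise(topk, lambda a, b: len(a & b) / len(a | b))
print(f"prediction stability:     {pred_stab:.3f}")
print(f"interpretation stability: {interp_stab:.3f}")
```

Measuring both quantities on the same perturbations mirrors the paper's comparison: prediction agreement is often high while the reported top-K feature set churns, which is exactly the gap the stability-first evaluation paradigm is meant to expose.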