🤖 AI Summary
To address the limited interpretability of black-box models, which hinders their deployment in high-stakes domains, this paper proposes a supervised clustering framework grounded in SHAP values that groups instances by predictive rationale rather than input similarity, thereby uncovering the heterogeneous attribution pathways that lead to identical predictions. Methodologically, SHAP feature contributions serve as the clustering features: a multi-class classifier is combined with hierarchical clustering, and a novel multi-class waterfall plot is introduced to support both fine-grained single-instance attribution and cross-instance comparison of attribution paths. Evaluated on synthetic data and real-world Alzheimer's disease data from the ADNI database, the approach robustly identifies cohorts exhibiting homogeneous predictive logic, substantially improving model transparency and clinical interpretability. Key innovations include (1) a clustering paradigm driven by predictive rationale rather than raw inputs and (2) a scalable multi-class waterfall plot for attribution analysis.
📝 Abstract
In this growing age of data and technology, large black-box models are becoming the norm due to their ability to handle vast amounts of data and learn incredibly complex input-output relationships. The deficiency of these methods, however, is their inability to explain the prediction process, making them untrustworthy and their use precarious in high-stakes situations. SHapley Additive exPlanations (SHAP) analysis is an explainable AI method growing in popularity for its ability to explain model predictions in terms of the original features. For each sample and feature in the data set, we associate a SHAP value that quantifies the contribution of that feature to the prediction for that sample. Clustering these SHAP values can provide insight into the data by grouping samples that not only received the same prediction, but did so for similar reasons. In doing so, we map the various pathways through which distinct samples arrive at the same prediction. To showcase this methodology, we present a simulated experiment in addition to a case study in Alzheimer's disease using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We also present a novel generalization of the waterfall plot to multi-class classification.
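The core idea of clustering on SHAP values rather than raw features can be sketched with a minimal example. The code below is an illustrative assumption, not the paper's implementation: it uses a logistic regression model, for which exact SHAP values have a closed form (the coefficient times the feature's deviation from its mean), and then applies hierarchical (agglomerative) clustering to the attribution vectors of positively predicted samples. The simulated data, model choice, and cluster count are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import AgglomerativeClustering

# Hypothetical data with two "pathways" to the positive class:
# some samples are positive mainly because of feature 0, others
# mainly because of feature 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = ((X[:, 0] > 1) | (X[:, 1] > 1)).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, exact SHAP values have a closed form:
# phi[i, j] = w[j] * (X[i, j] - mean(X[:, j]))
shap_values = model.coef_[0] * (X - X.mean(axis=0))

# Cluster only the positively predicted samples by their attribution
# vectors, grouping samples that reached the same prediction for
# similar reasons (here, which feature drove the prediction).
pos = model.predict(X) == 1
labels = AgglomerativeClustering(n_clusters=2).fit_predict(shap_values[pos])
```

With a more complex model (e.g. gradient-boosted trees), the closed form above would be replaced by a SHAP explainer such as those in the `shap` library, but the clustering step on the resulting attribution matrix is unchanged.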