AI Summary
Existing XAI methods focus on explaining point predictions (i.e., predictive means) while neglecting the attribution of predictive uncertainty. This work proposes the first model-agnostic framework for uncertainty attribution, treating conformal prediction interval width and bounds as cooperative-game value functions. It employs Harsanyi allocations, specifically proportional Shapley values, to quantify each input feature's contribution to prediction uncertainty, enabling interval-level uncertainty interpretability beyond conventional mean-prediction attribution. By combining Monte Carlo approximation with statistical robustness guarantees, it achieves substantial gains in computational efficiency. Experiments on synthetic and real-world datasets demonstrate that the method accurately identifies the dominant uncertainty-driving features, thereby enhancing the trustworthiness and interpretability of machine learning decisions in high-stakes applications.
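To make the "Harsanyi allocations" idea concrete, the sketch below computes proportional Shapley values for a tiny three-feature game: each coalition's Harsanyi dividend (obtained by Möbius inversion) is split among its members in proportion to their singleton worths, rather than equally as in the classical Shapley value. This is an illustrative sketch using the standard textbook definitions, not the paper's implementation; the example value table `widths` is invented for illustration and requires all singleton worths to be positive.

```python
from itertools import combinations
import numpy as np

def harsanyi_dividends(v, d):
    """Mobius inversion: dividend(S) = sum_{T subseteq S} (-1)^{|S|-|T|} v(T)."""
    div = {}
    for r in range(1, d + 1):
        for S in combinations(range(d), r):
            div[S] = sum(
                (-1) ** (len(S) - len(T)) * v(frozenset(T))
                for k in range(len(S) + 1)
                for T in combinations(S, k)
            )
    return div

def proportional_shapley(v, d):
    """Split each coalition's dividend proportionally to the singleton
    worths v({i}); requires v({i}) > 0 for every feature i."""
    singles = [v(frozenset([i])) for i in range(d)]
    phi = np.zeros(d)
    for S, delta in harsanyi_dividends(v, d).items():
        denom = sum(singles[i] for i in S)
        for i in S:
            phi[i] += singles[i] / denom * delta
    return phi

# Hypothetical value table: v(S) = interval width when coalition S is observed.
widths = {
    frozenset(): 0.0,
    frozenset([0]): 3.0, frozenset([1]): 1.0, frozenset([2]): 1.0,
    frozenset([0, 1]): 5.0, frozenset([0, 2]): 4.0, frozenset([1, 2]): 2.0,
    frozenset([0, 1, 2]): 6.0,
}
phi = proportional_shapley(widths.__getitem__, 3)
# phi = [3.75, 1.25, 1.0]; efficiency holds: phi sums to v(N) = 6.
```

Note that the attributions remain efficient (they sum to the grand-coalition worth), while feature 0, which has the largest stand-alone width, absorbs the larger share of the pairwise interaction dividend.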
Abstract
Cooperative game theory methods, notably Shapley values, have significantly enhanced machine learning (ML) interpretability. However, existing explainable AI (XAI) frameworks mainly attribute average model predictions, overlooking predictive uncertainty. This work addresses that gap by proposing a novel, model-agnostic uncertainty attribution (UA) method grounded in conformal prediction (CP). By defining cooperative games in which CP interval properties, such as width and bounds, serve as value functions, we systematically attribute predictive uncertainty to input features. Moving beyond traditional Shapley values, we use the richer class of Harsanyi allocations, in particular proportional Shapley values, which distribute attributions in proportion to feature importance. To keep the method computationally feasible, we propose a Monte Carlo approximation with robust statistical guarantees, significantly improving runtime efficiency. Our comprehensive experiments on synthetic benchmarks and real-world datasets demonstrate the practical utility and interpretative depth of our approach. By combining cooperative game theory and conformal prediction, we offer a rigorous, flexible toolkit for understanding and communicating predictive uncertainty in high-stakes ML applications.
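The abstract's core construction, a cooperative game whose value function is a CP interval property, can be sketched as follows. Here the value of a coalition S is the split-conformal interval width obtained when only the features in S are observed and the rest are imputed from a baseline, and Shapley attributions are estimated by permutation-sampling Monte Carlo. This is a minimal illustration under these masking and scoring assumptions, not the paper's exact algorithm; the toy model and data are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def interval_width(predict, X_cal, y_cal, baseline, subset, alpha=0.1):
    """Value function v(S): split-conformal interval width when only the
    features in `subset` are observed; the rest are imputed from `baseline`."""
    X_masked = np.tile(baseline, (len(X_cal), 1))
    if subset:
        X_masked[:, subset] = X_cal[:, subset]
    scores = np.abs(y_cal - predict(X_masked))            # absolute-residual scores
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    return 2.0 * np.quantile(scores, level)

def mc_width_shapley(predict, X_cal, y_cal, baseline, d, n_perm=100, alpha=0.1):
    """Permutation-sampling Monte Carlo estimate of Shapley attributions of v."""
    phi = np.zeros(d)
    for _ in range(n_perm):
        order, S = rng.permutation(d), []
        prev = interval_width(predict, X_cal, y_cal, baseline, S, alpha)
        for j in order:
            S = S + [int(j)]
            cur = interval_width(predict, X_cal, y_cal, baseline, S, alpha)
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm

# Toy demo: y depends strongly on x0, weakly on x1, and not at all on x2.
X_cal = rng.normal(size=(300, 3))
y_cal = 2.0 * X_cal[:, 0] + X_cal[:, 1] + 0.1 * rng.normal(size=300)
predict = lambda X: 2.0 * X[:, 0] + X[:, 1]
phi = mc_width_shapley(predict, X_cal, y_cal, np.zeros(3), d=3)
# Negative phi[j] means observing feature j shrinks the conformal interval.
```

Because each permutation's marginal contributions telescope, the estimated attributions sum exactly to v(all features) minus v(no features), and the irrelevant feature x2 receives zero attribution.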