Unveil Sources of Uncertainty: Feature Contribution to Conformal Prediction Intervals

📅 2025-05-19
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing XAI methods focus on explaining point predictions (i.e., predictive means) while neglecting the attribution of predictive uncertainty. This work proposes the first model-agnostic framework for uncertainty attribution, treating conformal prediction interval width and bounds as cooperative-game value functions. It employs Harsanyi allocations, specifically proportional Shapley values, to quantify each input feature's contribution to prediction uncertainty, enabling interval-level interpretability rather than conventional mean-prediction attribution. By combining Monte Carlo approximation with statistical robustness analysis, it achieves substantial gains in computational efficiency. Experiments on synthetic and real-world datasets demonstrate that the method accurately identifies the dominant uncertainty-driving features, enhancing the trustworthiness and interpretability of machine-learning decisions in high-stakes applications.
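As a sketch of the allocation the summary refers to: in the cooperative-game literature, the proportional Shapley value splits each coalition's Harsanyi dividend in proportion to the members' stand-alone worths (the notation below is the standard textbook form, not necessarily the paper's; it requires all singleton worths to share the same sign):

```latex
% Harsanyi dividend of coalition S under value function v (Moebius inversion)
d_v(S) = \sum_{T \subseteq S} (-1)^{|S| - |T|} \, v(T),
\qquad
% proportional Shapley value of feature i
\phi_i^{P}(v) = \sum_{\substack{S \subseteq N \\ i \in S}}
  \frac{v(\{i\})}{\sum_{j \in S} v(\{j\})} \, d_v(S).
```

When all singleton worths are equal, each dividend is split evenly and the allocation reduces to the classical Shapley value.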

📝 Abstract
Cooperative game theory methods, notably Shapley values, have significantly enhanced machine learning (ML) interpretability. However, existing explainable AI (XAI) frameworks mainly attribute average model predictions, overlooking predictive uncertainty. This work addresses that gap by proposing a novel, model-agnostic uncertainty attribution (UA) method grounded in conformal prediction (CP). By defining cooperative games where CP interval properties, such as width and bounds, serve as value functions, we systematically attribute predictive uncertainty to input features. Moving beyond traditional Shapley values, we use the richer class of Harsanyi allocations, and in particular the proportional Shapley values, which distribute attribution proportionally to feature importance. We propose a Monte Carlo approximation method with robust statistical guarantees to address computational feasibility, significantly improving runtime efficiency. Our comprehensive experiments on synthetic benchmarks and real-world datasets demonstrate the practical utility and interpretative depth of our approach. By combining cooperative game theory and conformal prediction, we offer a rigorous, flexible toolkit for understanding and communicating predictive uncertainty in high-stakes ML applications.
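To make the construction concrete, here is a minimal, self-contained sketch (not the authors' code; the toy data, linear model, and choice of coalition value function are all illustrative assumptions): the worth of a feature coalition is the split-conformal interval half-width of a model fit on those features alone, and proportional Shapley values then split the game's Harsanyi dividends in proportion to singleton worths.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy data: feature 0 matters most, feature 1 a little, feature 2 not at all.
n = 2000
X = rng.normal(size=(n, 3))
y = 3 * X[:, 0] + X[:, 1] + rng.normal(size=n)
X_tr, X_cal = X[:1000], X[1000:]
y_tr, y_cal = y[:1000], y[1000:]

def cp_half_width(features, alpha=0.1):
    """Split-CP 90% interval half-width for a model using only `features`."""
    if features:
        f = LinearRegression().fit(X_tr[:, features], y_tr)
        resid = np.abs(y_cal - f.predict(X_cal[:, features]))
    else:  # empty coalition: fall back to the constant mean predictor
        resid = np.abs(y_cal - y_tr.mean())
    k = int(np.ceil((len(resid) + 1) * (1 - alpha)))  # conformal quantile index
    return np.sort(resid)[k - 1]

d = 3
players = range(d)
# Cooperative game: v(S) = CP interval half-width using only features in S.
v = {S: cp_half_width(list(S))
     for r in range(d + 1) for S in combinations(players, r)}

def dividend(S):
    """Harsanyi dividend d_v(S) via Moebius inversion of v."""
    return sum((-1) ** (len(S) - len(T)) * v[T]
               for r in range(len(S) + 1) for T in combinations(S, r))

# Proportional Shapley: each dividend is split in proportion to the
# members' singleton worths v({j}) (all positive here: widths are > 0).
phi = np.zeros(d)
for r in range(1, d + 1):
    for S in combinations(players, r):
        w = np.array([v[(j,)] for j in S])
        phi[np.array(S)] += dividend(S) * w / w.sum()

# Negative phi[j] means feature j shrinks the interval (reduces uncertainty).
print(phi)
```

By construction the attributions are efficient: they sum to the change in half-width between the full feature set and the empty coalition, and the uninformative feature 2 receives an attribution near zero.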
Problem

Research questions and friction points this paper is trying to address.

Attributing predictive uncertainty to input features
Extending Shapley values for uncertainty interpretation
Providing a model-agnostic uncertainty attribution method
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-agnostic uncertainty attribution using conformal prediction
Proportional Shapley values for feature importance distribution
Monte Carlo approximation for computational efficiency
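The Monte Carlo idea in the last bullet can be sketched with standard permutation sampling (a common Shapley estimator; the paper's exact sampler and statistical guarantees may differ). The value function below is a cheap analytic stand-in for a CP interval half-width, with `beta2` holding hypothetical per-feature explained variances:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in value function: v(S) ~ residual scale (a proxy for CP interval
# half-width) when only the features in S are available to the model.
beta2 = np.array([9.0, 4.0, 1.0, 0.0])  # hypothetical explained variance per feature
noise = 1.0

def v(S):
    unused = np.ones(len(beta2), dtype=bool)
    unused[list(S)] = False
    return np.sqrt(noise + beta2[unused].sum())

def shapley_mc(v, d, n_perm=2000):
    """Permutation-sampling Monte Carlo estimate of Shapley values:
    average each feature's marginal contribution over random orderings."""
    phi = np.zeros(d)
    for _ in range(n_perm):
        order, S, prev = rng.permutation(d), [], v([])
        for j in order:
            S.append(j)
            cur = v(S)
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm

phi = shapley_mc(v, d=4)
# Feature 0 should get the most negative score (it shrinks the width most);
# feature 3 never changes v, so its estimate is exactly zero.
print(phi)
```

Each sampled permutation telescopes to v(N) - v(empty), so efficiency holds exactly for the averaged estimates; only the per-feature split carries Monte Carlo error, which shrinks at the usual 1/sqrt(n_perm) rate.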
Marouane Il Idrissi
Département de Mathématiques, Université du Québec à Montréal, Montréal, QC Canada; Institut Intelligence et Données, Université Laval, Québec, QC Canada
Agathe Fernandes Machado
Université du Québec à Montréal
E. Gallic
CNRS - Université de Montréal CRM – CNRS; Aix Marseille Univ, CNRS, AMSE, Marseille, France
Arthur Charpentier
Université du Québec à Montréal
Risk, insurance, predictive modeling, computational statistics, actuarial science