🤖 AI Summary
SHAP ignores causal structure, while do-SHAP relies on a pre-specified estimand, limiting its practical applicability. Method: We propose an estimand-agnostic causal SHAP framework that dispenses with assumptions about predefined causal parameters and unifies the modeling of arbitrary identifiable interventional queries. Our approach integrates interventional SHAP, causal graph models, and approximation-based optimization algorithms to jointly estimate diverse causal effects—including total, direct, and indirect effects—within a single model, while also enabling interpretation of latent-variable-driven data-generating processes. Contribution/Results: Evaluated on two real-world datasets, our method achieves up to 3.2× speedup in computation while maintaining high-fidelity causal attribution. It advances do-SHAP toward a general-purpose, practical, and scalable framework for causal explainable AI.
📝 Abstract
Among explainability techniques, SHAP stands out as one of the most popular, but often overlooks the causal structure of the problem. In response, do-SHAP employs interventional queries, but its reliance on estimands hinders its practical application. To address this problem, we propose the use of estimand-agnostic approaches, which allow for the estimation of any identifiable query from a single model, making do-SHAP feasible on complex graphs. We also develop a novel algorithm to significantly accelerate its computation at a negligible cost, as well as a method to explain inaccessible Data Generating Processes. We demonstrate the estimation and computational performance of our approach, and validate it on two real-world datasets, highlighting its potential in obtaining reliable explanations.