VARSHAP: Addressing Global Dependency Problems in Explainable AI with Variance-Based Local Feature Attribution

📅 2025-06-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing feature attribution methods (e.g., KernelSHAP, LIME) rely on global data distributions, leading to inaccurate characterization of local model behavior and distorted explanations. To address this, we propose VARSHAP, a model-agnostic local feature attribution method that is the first to use prediction variance reduction as the core Shapley value metric. VARSHAP rigorously satisfies the efficiency, symmetry, and additivity axioms of Shapley values. It estimates conditional variances via Monte Carlo sampling, eliminating the need for surrogate models or distributional assumptions, and is inherently robust to data distribution shifts. Experiments on synthetic and real-world datasets demonstrate that VARSHAP improves attribution accuracy by 12–23% over KernelSHAP and LIME. Qualitative evaluations confirm its superior alignment with local decision logic, significantly mitigating the local explanation bias induced by global distribution dependence.
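The core idea, computing Shapley values whose value function is the reduction in prediction variance when a coalition of features is fixed, can be sketched with permutation sampling. This is an illustrative reconstruction under stated assumptions, not the paper's exact estimator: `model` is assumed to be a vectorized callable, `background` a reference sample the unfixed features are drawn from, and the permutation/Monte Carlo budgets are arbitrary.

```python
import numpy as np

def varshap_sketch(model, x, background, n_perms=50, n_mc=100, rng=None):
    """Illustrative sketch: Shapley attribution where the value of a
    coalition S is the drop in prediction variance obtained by fixing
    the features in S to their values in x, with the remaining features
    drawn from a background sample (hypothetical helper, not the
    authors' implementation)."""
    rng = np.random.default_rng(rng)
    d = len(x)
    phi = np.zeros(d)

    def coalition_variance(fixed):
        # Monte Carlo estimate: draw unfixed features from the
        # background data, clamp the coalition's features to x,
        # and measure the variance of the model's predictions.
        idx = rng.integers(0, len(background), size=n_mc)
        samples = background[idx].copy()
        samples[:, fixed] = x[fixed]
        return model(samples).var()

    for _ in range(n_perms):
        order = rng.permutation(d)
        fixed = []
        prev_var = coalition_variance(fixed)
        for i in order:
            fixed.append(i)
            cur_var = coalition_variance(fixed)
            # Marginal contribution of feature i: variance it removes
            # when added to the current coalition.
            phi[i] += prev_var - cur_var
            prev_var = cur_var
    return phi / n_perms
```

Because each permutation's contributions telescope from the full-background variance down to zero (all features fixed), the attributions sum to the total prediction variance, mirroring the efficiency axiom the summary highlights.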

📝 Abstract
Existing feature attribution methods like SHAP often suffer from global dependence, failing to capture true local model behavior. This paper introduces VARSHAP, a novel model-agnostic local feature attribution method that uses the reduction of prediction variance as the key feature importance metric. Building upon the Shapley value framework, VARSHAP satisfies the key Shapley axioms but, unlike SHAP, is resilient to global data distribution shifts. Experiments on synthetic and real-world datasets demonstrate that VARSHAP outperforms popular methods such as KernelSHAP and LIME, both quantitatively and qualitatively.
Problem

Research questions and friction points this paper is trying to address.

Addresses global dependency issues in explainable AI
Proposes variance-based local feature attribution method
Improves resilience to global data distribution shifts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses variance reduction for feature importance
Model-agnostic local attribution method
Resilient to global data shifts
Mateusz Gajewski
Faculty of Computing and Telecommunications, Poznan University of Technology, Poznan, Poland, IDEAS NCBR
Mikołaj Morzy
Institute of Computing Science, Poznan University of Technology
machine learning · data mining · social network analysis · text mining
Adam Karczmarz
University of Warsaw
graph algorithms · data structures
Piotr Sankowski
Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland, MIM Solutions, Research Institute IDEAS