From Abstract to Actionable: Pairwise Shapley Values for Explainable AI

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Shapley value methods rely on abstract baselines or computationally expensive sampling, limiting interpretability and scalability. This paper proposes Pairwise Shapley Values (PSV), the first framework to anchor Shapley attribution to semantically meaningful neighboring sample pairs in feature space, eliminating opaque baseline dependencies. PSV integrates local neighborhood search, pairwise similarity measurement, and single-value imputation, yielding model-agnostic, lightweight, and efficient attribution while strictly satisfying all Shapley axioms. Evaluated on real-world tasks, including real estate price prediction, polymer property forecasting, and drug discovery, PSV achieves 10–100× faster inference than state-of-the-art baselines. User studies confirm that PSV explanations are significantly more intuitive, trustworthy, and practically useful than existing approaches.
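The pairwise reference selection described in the summary can be pictured as a nearest-neighbor lookup in feature space. A minimal sketch, assuming Euclidean distance as the similarity measure (the paper's actual neighborhood search and similarity metric are not detailed here, so both the function name and the distance choice are illustrative assumptions):

```python
import math

def nearest_reference(x, candidates):
    """Return the candidate closest to x in Euclidean feature space,
    to serve as the pairwise reference sample.
    Hypothetical helper: the paper's actual neighborhood search and
    similarity measurement may differ."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(candidates, key=lambda c: dist(x, c))
```

Anchoring attribution to a concrete neighbor like this is what makes the resulting comparison "human-relatable": the explanation contrasts the instance with a real, similar sample rather than an abstract average baseline.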

📝 Abstract
Explainable AI (XAI) is critical for ensuring transparency, accountability, and trust in machine learning systems as black-box models are increasingly deployed within high-stakes domains. Among XAI methods, Shapley values are widely used for their fairness and consistency axioms. However, prevalent Shapley value approximation methods commonly rely on abstract baselines or computationally intensive calculations, which can limit their interpretability and scalability. To address such challenges, we propose Pairwise Shapley Values, a novel framework that grounds feature attributions in explicit, human-relatable comparisons between pairs of data instances proximal in feature space. Our method introduces pairwise reference selection combined with single-value imputation to deliver intuitive, model-agnostic explanations while significantly reducing computational overhead. Here, we demonstrate that Pairwise Shapley Values enhance interpretability across diverse regression and classification scenarios, including real estate pricing, polymer property prediction, and drug discovery datasets. We conclude that the proposed methods enable more transparent AI systems and advance the real-world applicability of XAI.
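The abstract's combination of a pairwise reference with single-value imputation can be sketched as Shapley attribution where features outside a coalition are imputed from the paired neighbor. The sketch below uses exact enumeration over coalitions, so it is only practical for a handful of features; it is an illustrative reconstruction under those assumptions, not the paper's implementation:

```python
import itertools
import math

def pairwise_shapley(f, x, x_ref):
    """Exact Shapley attribution of f(x) relative to a neighboring
    reference x_ref. Single-value imputation: features outside the
    coalition take their values from x_ref. Exponential in len(x),
    so this sketch suits only small feature counts."""
    d = len(x)
    phi = [0.0] * d
    features = list(range(d))
    for i in features:
        others = [j for j in features if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                # Standard Shapley weight |S|! (d - |S| - 1)! / d!
                w = (math.factorial(len(S)) * math.factorial(d - len(S) - 1)
                     / math.factorial(d))
                with_i = [x[j] if (j in S or j == i) else x_ref[j]
                          for j in features]
                without_i = [x[j] if j in S else x_ref[j] for j in features]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Toy model (assumed for illustration): f(a, b) = 3a + 2b
f = lambda v: 3 * v[0] + 2 * v[1]
phi = pairwise_shapley(f, x=[1.0, 1.0], x_ref=[0.0, 0.0])
# Efficiency axiom: the attributions sum to f(x) - f(x_ref)
```

Because the baseline is a concrete neighboring instance rather than an expectation over a background dataset, each attribution reads as "how much this feature explains the difference between this sample and its neighbor", which is the human-relatable comparison the abstract describes.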
Problem

Research questions and friction points this paper is trying to address.

Enhance interpretability in Explainable AI
Reduce computational overhead in Shapley values
Improve scalability of feature attribution methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pairwise Shapley Values
Human-relatable comparisons
Reduced computational overhead
🔎 Similar Papers
2024-06-29 · Engineering Applications of Artificial Intelligence · Citations: 1
Jiaxin Xu (University of Notre Dame)
Tags: Material Informatics, Machine Learning, XAI
Hung Chau (Zillow Group, USA)
Angela Burden (Zillow Group, USA)