🤖 AI Summary
Existing Shapley value methods rely on abstract baselines or computationally expensive sampling, limiting interpretability and scalability. This paper proposes Pairwise Shapley Values (PSV), the first framework to anchor Shapley attribution to semantically meaningful neighboring sample pairs in feature space—eliminating opaque baseline dependencies. PSV integrates local neighborhood search, pairwise similarity measurement, and single-value imputation, ensuring model-agnostic, lightweight, and efficient attribution while strictly satisfying all Shapley axioms. Evaluated on real-world tasks—including real estate price prediction, polymer property forecasting, and drug discovery—PSV achieves 10–100× faster inference over state-of-the-art baselines. User studies confirm that PSV explanations are significantly more intuitive, trustworthy, and practically useful compared to existing approaches.
📝 Abstract
Explainable AI (XAI) is critical for ensuring transparency, accountability, and trust in machine learning systems as black-box models are increasingly deployed within high-stakes domains. Among XAI methods, Shapley values are widely used for their fairness and consistency axioms. However, prevalent Shapley value approximation methods commonly rely on abstract baselines or computationally intensive calculations, which can limit their interpretability and scalability. To address such challenges, we propose Pairwise Shapley Values, a novel framework that grounds feature attributions in explicit, human-relatable comparisons between pairs of data instances proximal in feature space. Our method introduces pairwise reference selection combined with single-value imputation to deliver intuitive, model-agnostic explanations while significantly reducing computational overhead. Here, we demonstrate that Pairwise Shapley Values enhance interpretability across diverse regression and classification scenarios--including real estate pricing, polymer property prediction, and drug discovery datasets. We conclude that the proposed methods enable more transparent AI systems and advance the real-world applicability of XAI.