🤖 AI Summary
This work addresses the limitation of the classical Shapley value, which assumes contributor interchangeability and thus fails to capture dependencies or priority differences. We propose the Priority-Aware Shapley Value (PASV), the first framework that unifies hard precedence constraints and soft priority weights within a principled axiomatic foundation, generalizing existing approaches as special cases. To enable practical computation, we introduce “priority scanning” for sensitivity analysis and develop a Metropolis–Hastings sampler based on adjacent swaps, facilitating efficient Monte Carlo estimation under arbitrary priority structures and supporting asymptotic analysis under extreme weights. Experiments on data valuation (MNIST/CIFAR10) and feature attribution (Census Income) demonstrate that PASV yields fairer allocations aligned with real-world dependency structures, confirming its effectiveness and practical utility.
📝 Abstract
Shapley values are widely used for model-agnostic data valuation and feature attribution, yet they implicitly assume contributors are interchangeable. This can be problematic when contributors are dependent (e.g., reused/augmented data or causal feature orderings) or when contributions should be adjusted by factors such as trust or risk. We propose Priority-Aware Shapley Value (PASV), which incorporates both hard precedence constraints and soft, contributor-specific priority weights. PASV is applicable to general precedence structures, recovers precedence-only and weight-only Shapley variants as special cases, and is uniquely characterized by natural axioms. We develop an efficient adjacent-swap Metropolis-Hastings sampler for scalable Monte Carlo estimation and analyze limiting regimes induced by extreme priority weights. Experiments on data valuation (MNIST/CIFAR10) and feature attribution (Census Income) demonstrate more structure-faithful allocations and a practical sensitivity analysis via our proposed"priority sweeping".