🤖 AI Summary
To address the high computational cost of Shapley-value attribution when explaining quantum AI models, this paper introduces Shapley values into the post-hoc explanation framework for quantum AI for the first time, proposing a quantum Shapley-value estimation algorithm based on quantum amplitude estimation and superposition sampling. The algorithm provides rigorous error bounds and fast convergence for arbitrary cooperative games, and it is proven to achieve a quadratic speedup (up to polylogarithmic factors) over classical Monte Carlo methods. Empirical validation on benchmark tasks, including voting games, demonstrates its effectiveness and practicality. Key contributions: (1) establishing the first Shapley-based interpretability paradigm tailored to quantum AI; (2) designing a universally applicable quantum-accelerated algorithm with provable performance guarantees; and (3) providing an efficient, reliable, and scalable theoretical tool for attributing quantum model decisions.
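For context on the quantity being estimated (this is the standard cooperative-game definition, not notation taken from the paper itself): the Shapley value of player $i$ in a game $(N, v)$ with $n = |N|$ players is the average marginal contribution of $i$ across all coalitions:

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)
```

The sum runs over all $2^{n-1}$ coalitions excluding $i$, which is why exact evaluation scales exponentially and sampling-based approximation, classical or quantum, is needed.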
📝 Abstract
This work focuses on developing efficient post-hoc explanations for quantum AI algorithms. In classical contexts, the cooperative game theory concept of the Shapley value adapts naturally to post-hoc explanations, where it can be used to identify which factors are important in an AI's decision-making process. An interesting question is how to translate Shapley values to the quantum setting and whether quantum effects could be used to accelerate their calculation. We propose quantum algorithms that can extract Shapley values to within a given confidence interval. Our method is capable of quadratically outperforming classical Monte Carlo approaches to approximating Shapley values, up to polylogarithmic factors, in various circumstances. We demonstrate the validity of our approach empirically on specific voting games and provide rigorous proofs of performance for general cooperative games.
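The classical Monte Carlo baseline that the quantum algorithm quadratically improves upon can be sketched as follows: sample random player orderings and average each player's marginal contribution. This is a minimal illustration, not the paper's implementation; the weighted voting game (weights 3, 2, 1 with quota 4) and the function names are hypothetical choices for the example. The standard error of such an estimator decays as $O(1/\sqrt{T})$ in the number of sampled permutations $T$, versus the $\tilde{O}(1/T)$ rate amplitude-estimation-based methods target.

```python
import random

def shapley_monte_carlo(players, value, num_samples=20000, seed=0):
    """Estimate Shapley values by averaging marginal contributions
    over uniformly random player orderings.

    players: list of player indices
    value:   coalition function v(S), taking a frozenset of players
    """
    rng = random.Random(seed)
    est = {p: 0.0 for p in players}
    for _ in range(num_samples):
        order = players[:]
        rng.shuffle(order)          # one random permutation
        coalition = set()
        prev = value(frozenset(coalition))
        for p in order:
            coalition.add(p)
            cur = value(frozenset(coalition))
            est[p] += cur - prev    # marginal contribution of p in this ordering
            prev = cur
    return {p: s / num_samples for p, s in est.items()}

# Hypothetical weighted majority voting game: a coalition wins (payoff 1)
# if its total weight meets the quota.
weights = [3, 2, 1]
quota = 4

def v(S):
    return 1.0 if sum(weights[i] for i in S) >= quota else 0.0

estimates = shapley_monte_carlo([0, 1, 2], v, num_samples=50000)
```

For this game the exact Shapley values are (2/3, 1/6, 1/6): player 0 is pivotal in four of the six orderings, players 1 and 2 in one each. By the efficiency property, the estimates always sum to $v(N) - v(\emptyset) = 1$.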