Explaining Quantum Circuits with Shapley Values: Towards Explainable Quantum Machine Learning

πŸ“… 2023-01-22
πŸ“ˆ Citations: 12
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the limited interpretability of parameterized quantum circuits (PQCs), this work introduces Shapley values as a game-theoretic, gate-level attribution framework for quantum machine learning (QML), quantitatively measuring the marginal contribution of individual quantum gates or gate groups to model performance. The method integrates Shapley value theory with quantum circuit simulation and real-device experiments on superconducting hardware (IBM Quantum), validating its effectiveness on classification and generative modeling tasks. Key contributions include: (1) a general attribution paradigm for interpretable QML; (2) uncovering the functional roles of critical gates in canonical variational algorithms; and (3) support for circuit debugging, transpilation optimization, and model diagnostics. Experiments span noiseless and noisy simulators as well as two physical quantum processors, thereby bridging eXplainable AI (XAI) and QML.
πŸ“ Abstract
Methods of artificial intelligence (AI) and especially machine learning (ML) have been growing ever more complex, and at the same time have more and more impact on people's lives. This leads to explainable AI (XAI) manifesting itself as an important research field that helps humans to better comprehend ML systems. In parallel, quantum machine learning (QML) is emerging with the ongoing improvement of quantum computing hardware combined with its increasing availability via cloud services. QML enables quantum-enhanced ML in which quantum mechanics is exploited to facilitate ML tasks, typically in the form of quantum-classical hybrid algorithms that combine quantum and classical resources. Quantum gates constitute the building blocks of gate-based quantum hardware and form circuits that can be used for quantum computations. For QML applications, quantum circuits are typically parameterized and their parameters are optimized classically such that a suitably defined objective function is minimized. Inspired by XAI, we raise the question of the explainability of such circuits by quantifying the importance of (groups of) gates for specific goals. To this end, we apply the well-established concept of Shapley values. The resulting attributions can be interpreted as explanations for why a specific circuit works well for a given task, improving the understanding of how to construct parameterized (or variational) quantum circuits, and fostering their human interpretability in general. An experimental evaluation on simulators and two superconducting quantum hardware devices demonstrates the benefits of the proposed framework for classification, generative modeling, transpilation, and optimization. Furthermore, our results shed some light on the role of specific gates in popular QML approaches.
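The abstract's core idea, treating (groups of) gates as players and attributing performance to them via their average marginal contributions, can be sketched as follows. This is a minimal illustration of exact Shapley values, not the paper's implementation; the gate names and the additive "accuracy gain" value function are made-up stand-ins for a real circuit-evaluation objective:

```python
# Hypothetical sketch: exact Shapley values for gate-level attribution.
# Players are gates; v(S) scores the circuit built from gate subset S.
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating all coalitions (O(2^n) calls to v)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (v(set(S) | {p}) - v(set(S)))
    return phi

# Toy value function: each gate contributes a fixed gain (illustrative values,
# not from the paper); in the QML setting v would re-evaluate the objective
# on the sub-circuit containing only the gates in S.
gains = {"RY(theta1)": 0.30, "CNOT(0,1)": 0.15, "RZ(theta2)": 0.05}

def v(S):
    return sum(gains[g] for g in S)

phi = shapley_values(list(gains), v)
# For an additive game like this toy one, each gate's Shapley value
# equals its own gain, and the values sum to v of the full circuit.
```

Exact computation scales exponentially in the number of gate groups, which is why practical attribution methods rely on sampling-based approximations of these sums.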
Problem

Research questions and friction points this paper is trying to address.

Explainability of quantum circuits
Application of Shapley values
Enhancing quantum machine learning interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shapley values explain quantum circuits
Quantum-enhanced machine learning techniques
Parameterized quantum circuits optimization
πŸ”Ž Similar Papers
No similar papers found.
👥 Authors
Raoul Heese (Fraunhofer ITWM)
Thore Gerlach (Fraunhofer IAIS)
Sascha Mücke (TU Dortmund)
Sabine Müller (Fraunhofer ITWM)
Matthias Jakobs (TU Dortmund)
Nico Piatkowski (Fraunhofer IAIS)
Resource Limitations · Probabilistic Machine Learning · Quantum Computing