🤖 AI Summary
To address the limited interpretability of parameterized quantum circuits (PQCs), this work applies Shapley values to build the first game-theoretic, gate-level attribution framework for quantum machine learning (QML), quantitatively measuring the marginal contribution of individual quantum gates (or groups of gates) to model performance. The method combines Shapley value theory with quantum circuit simulation and real-device experiments on superconducting hardware (IBM Quantum), and its effectiveness is validated on classification and generative modeling tasks. Key contributions include: (1) a general attribution paradigm for interpretable QML; (2) insight into the functional roles of critical gates in canonical variational algorithms; and (3) concrete support for circuit debugging, transpilation optimization, and model diagnostics. Experiments span noiseless and noisy simulators as well as two physical quantum processors, thereby bridging explainable AI (XAI) and QML.
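For reference, the Shapley value assigns each player i (here, a gate or group of gates in the set N of circuit gates) its average marginal contribution over all coalitions, given a value function v that maps gate subsets to a performance score. Taking v to be, e.g., validation accuracy or fidelity is an illustrative choice here, not necessarily the paper's exact definition:

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
  \Bigl( v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr)
```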
📄 Abstract
Methods of artificial intelligence (AI), and especially machine learning (ML), have been growing ever more complex while having an ever greater impact on people's lives. This has established explainable AI (XAI) as an important research field that helps humans better comprehend ML systems. In parallel, quantum machine learning (QML) is emerging thanks to ongoing improvements in quantum computing hardware and its increasing availability via cloud services. QML enables quantum-enhanced ML, in which quantum mechanics is exploited to facilitate ML tasks, typically in the form of hybrid algorithms that combine quantum and classical resources. Quantum gates are the building blocks of gate-based quantum hardware and form the circuits that carry out quantum computations. For QML applications, quantum circuits are typically parameterized, and their parameters are optimized classically such that a suitably defined objective function is minimized. Inspired by XAI, we raise the question of the explainability of such circuits by quantifying the importance of (groups of) gates for specific goals. To this end, we apply the well-established concept of Shapley values. The resulting attributions can be interpreted as explanations for why a specific circuit works well for a given task, improving the understanding of how to construct parameterized (or variational) quantum circuits and fostering their human interpretability in general. An experimental evaluation on simulators and two superconducting quantum hardware devices demonstrates the benefits of the proposed framework for classification, generative modeling, transpilation, and optimization. Furthermore, our results shed light on the role of specific gates in popular QML approaches.
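To make the gate-attribution idea concrete, below is a minimal, self-contained sketch (not the authors' implementation) that Monte Carlo estimates gate-level Shapley values for a tiny two-qubit circuit. The circuit, its parameters, the choice of value function, and the convention that an absent gate acts as the identity are all illustrative assumptions; here v(S) is the fidelity of the prepared state with a Bell state when only the gates in S are applied.

```python
import numpy as np

# --- minimal two-qubit statevector simulator (illustrative assumptions) ---
I2 = np.eye(2)

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control (basis order |00>, |01>, |10>, |11>)
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=float)

# Circuit as an ordered list of (name, 4x4 unitary); gates and angles
# are made up for illustration.
theta = np.pi / 2
gates = [
    ("RY(q0)",  np.kron(ry(theta), I2)),
    ("RY(q1)",  np.kron(I2, ry(theta / 3))),
    ("CNOT",    CX),
    ("RY'(q0)", np.kron(ry(theta / 5), I2)),
]

target = np.array([1, 0, 0, 1]) / np.sqrt(2)  # Bell state |Phi+>

def value(subset):
    """v(S): fidelity with the target when only the gates in S are applied,
    in circuit order; absent gates act as the identity (an assumption)."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0  # start in |00>
    for k, (_, U) in enumerate(gates):
        if k in subset:
            state = U @ state
    return abs(np.vdot(target, state)) ** 2

def shapley_mc(n_players, v, n_samples=5000, seed=0):
    """Monte Carlo Shapley estimate: average marginal contributions
    over uniformly random player permutations."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_players)
    for _ in range(n_samples):
        coalition, v_prev = set(), v(set())
        for i in rng.permutation(n_players):
            coalition.add(i)
            v_cur = v(coalition)
            phi[i] += v_cur - v_prev
            v_prev = v_cur
    return phi / n_samples

phi = shapley_mc(len(gates), value)
for (name, _), p in zip(gates, phi):
    print(f"{name:>8s}: {p:+.4f}")
# Efficiency check: Shapley values sum to v(N) - v(empty set).
print("sum:", phi.sum(), "vs", value(set(range(len(gates)))) - value(set()))
```

Exact Shapley values require summing over all 2^|N| gate subsets; permutation sampling, as in the sketch, is a standard way to keep the cost manageable as circuits grow.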