Adversarial Threats in Quantum Machine Learning: A Survey of Attacks and Defenses

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This chapter systematically surveys adversarial security threats specific to quantum machine learning (QML) in the NISQ era, including model extraction, data poisoning, variational-circuit inversion, and backdoor attacks, across cloud QML-as-a-Service (QMLaaS) platforms, hybrid quantum-classical architectures, and quantum generative models. Motivated by vulnerabilities in QMLaaS workflows and the noise sensitivity of quantum hardware, it proposes the first holistic, full-stack adversarial threat taxonomy for QML and introduces four novel defense paradigms: noise-signature watermarking, hardware-aware obfuscation, quantum-adapted adversarial training, and quantum differential privacy. Combining quantum-circuit transpilation analysis, realistic hardware-noise modeling, and hybrid adversarial training, it establishes a unified benchmark for quantitative attack-defense evaluation that explicitly characterizes the noise-security trade-off boundary, yielding actionable, standards-aligned pathways toward trustworthy QMLaaS deployment in real-world noisy intermediate-scale quantum environments.
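
To make the hardware-noise modeling mentioned above concrete, here is a minimal sketch, assuming PennyLane and its density-matrix simulator; a depolarizing channel stands in for real device noise, and the two-qubit circuit, rates, and inputs are illustrative choices, not the chapter's benchmark.

```python
# Minimal sketch, assuming PennyLane: a toy variational classifier with a
# depolarizing channel standing in for NISQ gate noise. All parameters here
# are illustrative, not taken from the chapter.
import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev = qml.device("default.mixed", wires=n_wires)  # density-matrix simulator

@qml.qnode(dev)
def noisy_classifier(x, weights, p_noise):
    for w in range(n_wires):
        qml.RY(x[w], wires=w)          # angle-encode classical features
    for w in range(n_wires):
        qml.RZ(weights[w], wires=w)    # one variational layer
    qml.CNOT(wires=[0, 1])
    for w in range(n_wires):
        qml.DepolarizingChannel(p_noise, wires=w)  # hardware-noise stand-in
    return qml.expval(qml.PauliZ(0))

x = np.array([0.3, -1.2])
weights = np.array([0.7, 0.1])
for p in (0.0, 0.05, 0.2):
    print(f"p={p}: <Z0> = {float(noisy_classifier(x, weights, p)):.4f}")
```

Sweeping the noise rate in this way is the kind of probe the noise-security trade-off analysis relies on: more noise blurs the outputs an extraction adversary can copy, but it also degrades legitimate accuracy.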

📝 Abstract
Quantum Machine Learning (QML) integrates quantum computing with classical machine learning, primarily to solve classification, regression, and generative tasks. However, its rapid development raises critical security challenges in the Noisy Intermediate-Scale Quantum (NISQ) era. This chapter examines adversarial threats unique to QML systems, focusing on vulnerabilities in cloud-based deployments, hybrid architectures, and quantum generative models. Key attack vectors include model stealing via transpilation or output extraction, data poisoning through quantum-specific perturbations, reverse engineering of proprietary variational quantum circuits, and backdoor attacks. Adversaries exploit noise-prone quantum hardware and insufficiently secured QML-as-a-Service (QMLaaS) workflows to compromise model integrity, ownership, and functionality. Defense mechanisms leverage quantum properties to counter these threats. Noise signatures from training hardware act as non-invasive watermarks, while hardware-aware obfuscation techniques and ensemble strategies disrupt cloning attempts. Emerging solutions also adapt classical adversarial training and differential privacy to quantum settings, addressing vulnerabilities in quantum neural networks and generative architectures. However, securing QML requires addressing open challenges such as balancing noise levels for reliability and security, mitigating cross-platform attacks, and developing quantum-classical trust frameworks. This chapter summarizes recent advances in attacks and defenses, offering a roadmap for researchers and practitioners to build robust, trustworthy QML systems resilient to evolving adversarial landscapes.
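
As one concrete instance of the model stealing via output extraction described above, the following minimal sketch, again assuming PennyLane, shows an adversary who only queries a deployed model and fits a surrogate to the returned expectation values; the victim circuit, its "secret" weights, the probe grid, and the surrogate ansatz are all assumptions for illustration, not the chapter's setup.

```python
# Minimal sketch, assuming PennyLane: model stealing via output extraction.
# The adversary sees only query results, never the circuit or its weights.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def victim(x):
    qml.RY(x, wires=0)
    qml.RZ(0.9, wires=0)   # secret weight (hidden behind the QMLaaS API)
    qml.RY(0.4, wires=0)   # secret weight
    return qml.expval(qml.PauliZ(0))

@qml.qnode(dev)
def surrogate(x, w):
    # The adversary guesses an ansatz; here it happens to match the victim.
    qml.RY(x, wires=0)
    qml.RZ(w[0], wires=0)
    qml.RY(w[1], wires=0)
    return qml.expval(qml.PauliZ(0))

# 1. Query the black-box service on probe inputs.
xs = np.linspace(-np.pi, np.pi, 15)
ys = np.array([float(victim(x)) for x in xs], requires_grad=False)

# 2. Fit the surrogate to the stolen input-output pairs. Gradient descent
#    may recover the secret weights or a functionally equivalent circuit.
def loss(w):
    return sum((surrogate(x, w) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w = np.array([0.1, 0.1], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.3)
for _ in range(80):
    w = opt.step(loss, w)
print("recovered weights:", w, "final loss:", float(loss(w)))
```

The defenses surveyed in the chapter target exactly this loop, whether by watermarking the model so stolen copies can be identified or by obfuscation and ensembling so the extracted surrogate is inconsistent.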
Problem

Research questions and friction points this paper is trying to address.

Examining adversarial threats in Quantum Machine Learning systems (an evasion-style attack is sketched after this list)
Addressing vulnerabilities in cloud-based and hybrid QML architectures
Developing defenses against quantum-specific attacks and noise exploitation
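
The attack referenced in the first item can be made concrete with a short sketch, assuming PennyLane: an FGSM-style perturbation of the classical features before angle encoding, one common way quantum-specific perturbations are studied. The circuit, "trained" weights, and epsilon are illustrative assumptions.

```python
# Minimal sketch, assuming PennyLane: an FGSM-style adversarial perturbation
# applied to the classical features before they are angle-encoded.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def classifier(x, weights):
    qml.RY(x[0], wires=0)              # angle encoding of the input
    qml.RY(x[1], wires=1)
    qml.RZ(weights[0], wires=0)        # (pre-)trained variational layer
    qml.RZ(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))   # sign(<Z0>) is the predicted label

def loss(x, weights, label):
    return (classifier(x, weights) - label) ** 2

weights = np.array([0.5, -0.3], requires_grad=False)
x = np.array([0.4, 1.1], requires_grad=True)
label = 1.0

# FGSM: step the inputs along the sign of the input gradient of the loss.
grad_x = qml.grad(loss, argnum=0)(x, weights, label)
eps = 0.25
x_adv = x + eps * np.sign(grad_x)
print("clean output:      ", float(classifier(x, weights)))
print("adversarial output:", float(classifier(x_adv, weights)))
```

Quantum-adapted adversarial training then folds such perturbed inputs back into the training loop, mirroring the classical recipe.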
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging noise signatures as hardware watermarks (sketched after this list)
Adapting classical adversarial training to quantum
Using ensemble strategies to prevent model cloning
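
The first item above, noise signatures as hardware watermarks, can be sketched as follows, assuming PennyLane; a depolarizing channel stands in for the training hardware's noise profile, and the probe inputs, rates, and threshold are illustrative assumptions, not the chapter's protocol.

```python
# Minimal sketch, assuming PennyLane: noise-signature watermarking. The
# owner records the model's responses to fixed probes on the training
# backend; a clone run on different (here, noiseless) hardware lacks the
# noise fingerprint. All rates and thresholds are illustrative.
import pennylane as qml
from pennylane import numpy as np

def make_model(p_noise):
    dev = qml.device("default.mixed", wires=1)

    @qml.qnode(dev)
    def model(x):
        qml.RY(x, wires=0)
        qml.DepolarizingChannel(p_noise, wires=0)  # training-hardware noise stand-in
        return qml.expval(qml.PauliZ(0))

    return model

probes = np.linspace(0.0, np.pi, 8)           # fixed, owner-chosen probe inputs

def fingerprint(model):
    return np.array([float(model(x)) for x in probes])

owner = make_model(p_noise=0.08)              # model as trained on noisy backend
clone = make_model(p_noise=0.0)               # stolen copy on clean hardware
reference = fingerprint(owner)                # stored at training time

def verified(model, threshold=1e-3):
    return float(np.mean((fingerprint(model) - reference) ** 2)) < threshold

print("owner verified:", verified(owner))     # True: fingerprint matches
print("clone verified:", verified(clone))     # False: noise signature absent
```

Because the watermark is carried by the hardware's own noise statistics rather than by extra embedded parameters, verification is non-invasive, which is the property the abstract highlights.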