Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era

📅 2024-11-21
📈 Citations: 2
Influential: 0
🤖 AI Summary
Quantum machine learning (QML) models trained through noisy intermediate-scale quantum (NISQ) cloud services are exposed to data poisoning, and classical poisoning techniques, which require substantial attacker knowledge and lack noise resilience, transfer poorly to this setting. Method: The authors propose QUID, a quantum indiscriminate data poisoning attack tailored to the quantum cloud setting. QUID measures intra-class encoder state similarity (ESS) from the outputs of the quantum encoding circuit, yielding a gradient-free, noise-resilient, low-overhead rule for relabeling poisoned samples, and is evaluated across architectures and datasets under realistic NISQ noise (e.g., IBM_Brisbane). Contribution/Results: In both noiseless and noisy environments, QUID degrades model accuracy by up to 92% relative to baseline models and by up to 75% relative to random label flipping, and it still causes more than 50% accuracy degradation against state-of-the-art classical defenses. The work is the first to reevaluate data poisoning attacks in the context of QML.

📝 Abstract
With the growing interest in Quantum Machine Learning (QML) and the increasing availability of quantum computers through cloud providers, addressing the potential security risks associated with QML has become an urgent priority. One key concern in the QML domain is the threat of data poisoning attacks in the current quantum cloud setting. Adversarial access to training data could severely compromise the integrity and availability of QML models. Classical data poisoning techniques require significant knowledge and training to generate poisoned data, and lack noise resilience, making them ineffective for QML models in the Noisy Intermediate Scale Quantum (NISQ) era. In this work, we first propose a simple yet effective technique to measure intra-class encoder state similarity (ESS) by analyzing the outputs of encoding circuits. Leveraging this approach, we introduce QUID, a Quantum Indiscriminate Data poisoning attack. Through extensive experiments conducted in both noiseless and noisy environments (e.g., IBM_Brisbane's noise), across various architectures and datasets, QUID achieves up to 92% accuracy degradation in model performance compared to baseline models and up to 75% accuracy degradation compared to random label-flipping. We also tested QUID against state-of-the-art classical defenses, with accuracy degradation still exceeding 50%, demonstrating its effectiveness. This work represents the first attempt to reevaluate data poisoning attacks in the context of QML.
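The abstract describes ESS as a quantity computed directly from encoder circuit outputs. The paper does not give code here, but a minimal sketch of one plausible reading (mean pairwise state fidelity among encoded samples of a class, using PennyLane angle encoding) looks like this; the encoder choice, qubit count, and the `ess` helper are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch: intra-class encoder state similarity (ESS).
# Assumes angle encoding and defines ESS as the mean pairwise
# state fidelity |<psi_i|psi_j>|^2 within one class.
import itertools
import numpy as np
import pennylane as qml

n_qubits = 4  # illustrative; the paper's encoders may differ
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def encoder_state(x):
    # Simple angle-encoding circuit standing in for the paper's encoder.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    return qml.state()

def ess(samples):
    """Mean pairwise fidelity over the encoded states of one class."""
    states = [encoder_state(x) for x in samples]
    fids = [abs(np.vdot(si, sj)) ** 2  # np.vdot conjugates its first arg
            for si, sj in itertools.combinations(states, 2)]
    return float(np.mean(fids))

# Example: four feature vectors drawn from the same (synthetic) class
rng = np.random.default_rng(0)
print(ess(rng.uniform(0, np.pi, size=(4, n_qubits))))
```

Because this only queries the encoding circuit, it needs no gradients of the trained model, which is consistent with the gradient-free, low-overhead attack the summary describes.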
Problem

Research questions and friction points this paper is trying to address.

Addressing security risks in Quantum Machine Learning (QML)
Threat of data poisoning attacks in the current quantum cloud setting
Developing an effective, noise-resilient quantum data poisoning attack (QUID)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Measure intra-class encoder state similarity (ESS) from encoding circuit outputs
Introduce QUID, a Quantum Indiscriminate Data poisoning attack guided by ESS (see the sketch after this list)
Validate QUID in both noisy and noiseless quantum environments
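One way ESS could drive an indiscriminate label-flipping attack is to relabel each poisoned sample to the class whose encoded states it resembles least. This is our interpretation of how a similarity score turns into a poisoning rule, not the paper's published algorithm; `ess_to_class` and `quid_flip` are hypothetical helpers, and `encoder_state` is the function from the earlier sketch:

```python
# Hedged sketch: ESS-guided label flipping (QUID-style, our reading).
# Rather than flipping labels at random, assign the label of the
# least-similar class, so the poisoned point is maximally misleading.
import numpy as np

def ess_to_class(x, class_samples):
    """Mean fidelity between encoder_state(x) and a class's encoded states."""
    s = encoder_state(x)  # from the ESS sketch above
    return float(np.mean([abs(np.vdot(s, encoder_state(c))) ** 2
                          for c in class_samples]))

def quid_flip(x, true_label, data_by_class):
    """Relabel x to the class whose encoder states it matches least."""
    scores = {lbl: ess_to_class(x, xs)
              for lbl, xs in data_by_class.items() if lbl != true_label}
    return min(scores, key=scores.get)  # lowest similarity wins
```

Only encoder outputs are consulted, so the rule stays gradient-free and, since fidelities are averaged over samples, comparatively robust to per-shot noise.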