Critical Evaluation of Quantum Machine Learning for Adversarial Robustness

📅 2025-11-19
🤖 AI Summary
This study systematically evaluates the robustness of quantum machine learning (QML) under black-box, gray-box, and white-box adversarial threats. We employ quantum neural networks with angle and amplitude encoding on the MNIST and AZ-Class datasets, conducting label-flipping, QUID data-poisoning, and FGSM/PGD adversarial attacks, while incorporating depolarizing noise to emulate NISQ hardware constraints. Our key contribution is the first identification of a fundamental trade-off between representational capacity and robustness in QML: amplitude encoding achieves high accuracy (93%) in noise-free settings but degrades catastrophically (accuracy below 5%) under perturbations or noise, whereas angle encoding remains markedly more stable in shallow, noisy circuits. Crucially, we find that moderate noise can intrinsically enhance QML robustness, revealing a new paradigm for designing secure, noise-resilient quantum learning architectures tailored to near-term quantum devices.
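The capacity gap between the two encodings comes down to how many qubits each needs for a given input. A minimal NumPy sketch (bare state vectors, not the paper's trained circuits) illustrating the two data-encoding schemes:

```python
import numpy as np

def angle_encode(x):
    """Angle encoding: one qubit per feature, each prepared as RY(x_i)|0>.
    n features -> product state of n qubits (2**n amplitudes)."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])  # RY(x)|0>
        state = np.kron(state, qubit)
    return state

def amplitude_encode(x):
    """Amplitude encoding: normalize x into the amplitudes of
    ceil(log2(n)) qubits, zero-padding up to a power of two."""
    dim = 1 << int(np.ceil(np.log2(len(x))))
    padded = np.zeros(dim)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded)

x = np.array([0.1, 0.5, 0.9, 1.3])
print(angle_encode(x).shape)      # 4 features on 4 qubits -> (16,)
print(amplitude_encode(x).shape)  # 4 features on 2 qubits -> (4,)
```

The same exponential compression that lets amplitude encoding pack a full image into few qubits is what the paper links to its fragility: every feature shares the same global state, so small perturbations spread across all amplitudes.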

📝 Abstract
Quantum Machine Learning (QML) integrates quantum computational principles into learning algorithms, offering improved representational capacity and computational efficiency. Nevertheless, the security and robustness of QML systems remain underexplored, especially under adversarial conditions. In this paper, we present a systematization of adversarial robustness in QML, integrating conceptual organization with empirical evaluation across three threat models: black-box, gray-box, and white-box. We implement representative attacks in each category (label-flipping for black-box, QUID encoder-level data poisoning for gray-box, and FGSM and PGD for white-box) using Quantum Neural Networks (QNNs) trained on two datasets from distinct domains, MNIST from computer vision and AZ-Class from Android malware, across multiple circuit depths (2, 5, 10, and 50 layers) and two encoding schemes (angle and amplitude). Our evaluation shows that amplitude encoding yields the highest clean accuracy (93% on MNIST and 67% on AZ-Class) in deep, noiseless circuits; however, it degrades sharply under adversarial perturbations and depolarizing noise (p = 0.01), with accuracy dropping below 5%. In contrast, angle encoding, while offering lower representational capacity, remains more stable in shallow, noisy regimes, revealing a trade-off between capacity and robustness. Moreover, the QUID attack attains higher attack success rates, though quantum noise channels disrupt the Hilbert-space correlations it exploits, weakening its impact in image domains. This suggests that noise can act as a natural defense mechanism in Noisy Intermediate-Scale Quantum (NISQ) systems. Overall, our findings guide the development of secure and resilient QML architectures for practical deployment, and they underscore the importance of designing threat-aware models that remain reliable under real-world noise in NISQ settings.
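FGSM, one of the white-box attacks evaluated above, is a single gradient-sign step bounded by a budget eps. A toy sketch on a plain logistic classifier (not the paper's QNN; the weights here are illustrative) shows the mechanics:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM against a logistic classifier with BCE loss:
    x_adv = x + eps * sign(dL/dx), where dL/dx = (sigmoid(w.x + b) - y) * w."""
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # toy model weights
x = rng.normal(size=4)   # clean input
x_adv = fgsm(x, y=1.0, w=w, b=0.0, eps=0.1)
print(np.abs(x_adv - x))  # every component perturbed by exactly eps = 0.1
```

PGD iterates this step with a projection back into the eps-ball, which is why it is the stronger of the two attacks in the paper's evaluation.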
Problem

Research questions and friction points this paper is trying to address.

Evaluating adversarial robustness of quantum machine learning across threat models
Analyzing trade-offs between encoding schemes' accuracy and noise resilience
Investigating quantum noise as potential defense mechanism against attacks
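The depolarizing noise the paper uses to emulate NISQ hardware mixes a state toward the maximally mixed state, which is what blunts the Hilbert-space correlations the QUID attack exploits. A minimal density-matrix sketch at the paper's noise level p = 0.01:

```python
import numpy as np

def depolarize(rho, p):
    """Depolarizing channel on a d-dimensional density matrix:
    rho' = (1 - p) * rho + p * I / d  (trace-preserving)."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

# pure |0><0| state under the paper's p = 0.01
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
rho_noisy = depolarize(rho, 0.01)
print(np.diag(rho_noisy))  # [0.995 0.005]: slightly mixed, trace still 1
```

At p = 0.01 the state is barely disturbed, yet, per the paper's findings, even this mild contraction toward I/d can wash out the fine-grained correlations an encoder-level poisoning attack depends on.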
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematized adversarial robustness across three threat models
Implemented representative black-box, gray-box, and white-box attacks against QNNs on MNIST and AZ-Class
Evaluated encoding schemes under noise and perturbations
Saeefa Rubaiyet Nowmi
Graduate Student, Department of Computer Science, University of Texas at El Paso
Quantum Information Science, Quantum Machine Learning, Post-Quantum Cryptography
Jesus Lopez
Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Md Mahmudul Alam Imon
Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Shahrooz Pouryousef
Computer Sciences, University of Massachusetts Amherst, Amherst, MA, USA
Mohammad Saidur Rahman
Assistant Professor, University of Texas at El Paso
Machine Learning for Security, Malware Analysis, Traffic Analysis, Quantum Security