Differentially Private Federated Quantum Learning via Quantum Noise

📅 2025-08-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Quantum federated learning (QFL) on noisy intermediate-scale quantum (NISQ) devices faces severe model privacy leakage risks. Method: This work pioneers the use of intrinsic device-level quantum noise—specifically, depolarizing channels—as a natural resource for differential privacy (DP). By jointly tuning measurement count and noise strength, it dynamically controls the privacy budget ε, enabling adjustable trade-offs among privacy protection, model utility, and computational overhead. The approach integrates quantum noise modeling, DP theory, QFL architecture, and adversarial attack evaluation, with systematic analysis of quantum machine learning models’ noise sensitivity. Contribution/Results: Experiments establish a quantitative relationship between ε and noise parameters, reduce adversarial attack success rates by 42–68% across multiple attack types, and incur <3.5% accuracy degradation. This constitutes the first practical privacy-enhancing learning paradigm tailored for resource-constrained quantum hardware.

Technology Category

Application Category

📝 Abstract
Quantum federated learning (QFL) enables collaborative training of quantum machine learning (QML) models across distributed quantum devices without raw data exchange. However, QFL remains vulnerable to adversarial attacks, where shared QML model updates can be exploited to undermine information privacy. In the context of noisy intermediate-scale quantum (NISQ) devices, a key question arises: How can inherent quantum noise be leveraged to enforce differential privacy (DP) and protect model information during training and communication? This paper explores a novel DP mechanism that harnesses quantum noise to safeguard quantum models throughout the QFL process. By tuning noise variance through measurement shots and depolarizing channel strength, our approach achieves desired DP levels tailored to NISQ constraints. Simulations demonstrate the framework's effectiveness by examining the relationship between differential privacy budget and noise parameters, as well as the trade-off between security and training accuracy. Additionally, we demonstrate the framework's robustness against an adversarial attack designed to compromise model performance using adversarial examples, with evaluations based on critical metrics such as accuracy on adversarial examples, confidence scores for correct predictions, and attack success rates. The results reveal a tunable trade-off between privacy and robustness, providing an efficient solution for secure QFL on NISQ devices with significant potential for reliable quantum computing applications.
Problem

Research questions and friction points this paper is trying to address.

Leveraging quantum noise for differential privacy in federated learning
Protecting quantum model information during training and communication
Achieving tunable privacy-robustness trade-off on NISQ quantum devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging quantum noise for differential privacy
Tuning noise variance via measurement shots
Achieving privacy-robustness trade-off in federated learning
🔎 Similar Papers
No similar papers found.