Black-Box Auditing of Quantum Model: Lifted Differential Privacy with Quantum Canaries

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Quantum machine learning (QML) poses privacy risks due to memorization of sensitive training data, yet existing quantum differential privacy (QDP) mechanisms lack empirical auditing tools. To address this gap, we propose the first black-box privacy auditing framework tailored for QML. Our method leverages quantum decoy-state encoding and trace-distance bound analysis to establish a rigorous mathematical relationship between decoy bias magnitude and privacy budget consumption, thereby deriving a measurable lower bound on privacy leakage. We further design a cross-platform (quantum simulator and superconducting hardware) black-box query protocol enabling end-to-end empirical validation. Evaluated across multiple QML models, our framework accurately quantifies actual privacy leakage during training; experimental results closely match theoretical bounds, effectively bridging the gap between formal privacy guarantees and empirical assessment.

📝 Abstract
Quantum machine learning (QML) promises significant computational advantages, yet models trained on sensitive data risk memorizing individual records, creating serious privacy vulnerabilities. While Quantum Differential Privacy (QDP) mechanisms provide theoretical worst-case guarantees, they critically lack empirical verification tools for deployed models. We introduce the first black-box privacy auditing framework for QML based on Lifted Quantum Differential Privacy, leveraging quantum canaries (strategically offset-encoded quantum states) to detect memorization and precisely quantify privacy leakage during training. Our framework establishes a rigorous mathematical connection between canary offset and trace distance bounds, deriving empirical lower bounds on privacy budget consumption that bridge the critical gap between theoretical guarantees and practical privacy verification. Comprehensive evaluations across both simulated and physical quantum hardware demonstrate our framework's effectiveness in measuring actual privacy loss in QML models, enabling robust privacy verification in QML systems.
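The abstract's core idea, turning an empirical distinguishing test on canary states into a certified lower bound on the privacy budget, can be sketched with the standard distinguishing-game bounds. The functions and thresholds below are illustrative assumptions, not the paper's actual derivation: `eps_from_trace_distance` inverts the known pure-DP bound on total variation/trace distance, T ≤ (e^ε − 1)/(e^ε + 1), and `eps_from_attack` applies the classical (ε, δ)-auditing bound log((TPR − δ)/FPR) to a canary-membership attacker.

```python
import math

def eps_from_trace_distance(T: float) -> float:
    """Lower bound on epsilon implied by a measured trace distance T
    between output distributions with and without the canary.
    Inverts T <= (e^eps - 1)/(e^eps + 1), the pure-DP bound on
    total variation distance (illustrative, not the paper's exact bound)."""
    if not 0.0 <= T < 1.0:
        raise ValueError("trace distance must lie in [0, 1)")
    return math.log((1.0 + T) / (1.0 - T))

def eps_from_attack(tpr: float, fpr: float, delta: float = 0.0) -> float:
    """Lower bound on epsilon from a black-box canary detector with the
    given true/false positive rates (classical DP auditing bound)."""
    if fpr <= 0.0 or tpr <= delta:
        return 0.0
    return math.log((tpr - delta) / fpr)

# Hypothetical audit outcome: a detector with 90% TPR at 5% FPR
# certifies eps >= log(0.9 / 0.05) = log(18) ~= 2.89.
```

A gap between such an empirical lower bound and the ε claimed by the training mechanism is exactly the kind of verification the framework targets: if the measured bound exceeds the advertised budget, the privacy claim is falsified.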
Problem

Research questions and friction points this paper is trying to address.

Auditing quantum machine learning models for privacy vulnerabilities
Lacking empirical verification tools for quantum differential privacy
Detecting memorization and quantifying privacy leakage in training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box auditing framework for quantum machine learning
Lifted differential privacy with quantum canaries
Empirical lower bounds on privacy budget consumption
Baobao Song
Faculty of Engineering and IT, University of Technology Sydney, Sydney, Ultimo, NSW 2007, Australia
Shiva Raj Pokhrel
Marie Curie Fellow, SMIEEE, Deakin University
Gen AI · Mobile Computing · Quantum Computing · Federated Learning · Android/iOS · Automation
Athanasios V. Vasilakos
Center for AI Research (CAIR), University of Agder (UiA), Grimstad, Norway
Tianqing Zhu
City University of Macau
Privacy · Cyber Security · Machine Learning · AI Security
Gang Li
School of IT, Deakin University, Geelong, VIC 3125, Australia