🤖 AI Summary
Quantum machine learning (QML) models are vulnerable to adversarial attacks and typically lack formal guarantees of privacy and robustness. Method: This paper establishes a theoretical connection between quantum noise channels and differential privacy by constructing a family of $(\alpha, \gamma)$-parameterized noise channels that are inherently $\epsilon$-differentially private, recovering the known $\epsilon$-DP bounds for depolarizing and random rotation channels as special cases. The authors further design a semidefinite programming (SDP)-based optimizer that constructs a certifiably robust noise channel outperforming depolarizing noise, quantify how $\alpha$ and $\gamma$ affect certifiable robustness, and examine how the choice of quantum state encoding impacts the classifier's robustness. Contribution/Results: Experiments on small-scale QML models demonstrate improvements in adversarial accuracy over depolarizing noise. The framework provides a paradigm for QML that simultaneously ensures provable robustness and rigorous privacy protection, advancing the foundation for trustworthy quantum learning systems.
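To make the SDP step concrete, below is a minimal sketch of what an optimization over quantum noise channels can look like in the Choi-matrix representation, written with cvxpy. This is not the paper's actual program: the objective, the reference state `rho`, the measurement effect `M`, and the privacy floor `c` are all illustrative assumptions. The idea shown is that a constraint $J \succeq cI$ on the Choi matrix forces every channel output to satisfy $\mathcal{E}(\rho) \succeq cI$, which caps output-probability ratios at $1/c$ and therefore yields $\epsilon$-DP with $\epsilon \le \ln(1/c)$.

```python
# Illustrative sketch (assumed formulation, NOT the paper's SDP): optimize over
# single-qubit channels via the Choi matrix J = sum_{ij} |i><j| (x) E(|i><j|),
# maximizing the correct-class probability for one encoded state while a
# DP-style floor J >= c*I bounds output-probability ratios by 1/c.
import cvxpy as cp
import numpy as np

d = 2
c = 0.1                              # assumed privacy floor: eps <= ln(1/c)
rho = np.diag([1.0, 0.0])            # assumed reference encoded state |0><0|
M = np.diag([1.0, 0.0])              # assumed correct-class measurement effect

J = cp.Variable((d * d, d * d), hermitian=True)

# Recover the channel output from the Choi matrix: E(rho) = Tr_in[(rho^T (x) I) J].
E_rho = cp.partial_trace(np.kron(rho.T, np.eye(d)) @ J, dims=(d, d), axis=0)

constraints = [
    J >> 0,                                                  # complete positivity
    cp.partial_trace(J, dims=(d, d), axis=1) == np.eye(d),   # trace preservation
    J >> c * np.eye(d * d),                                  # DP-style output floor
]
prob = cp.Problem(cp.Maximize(cp.real(cp.trace(M @ E_rho))), constraints)
prob.solve()
print(f"optimal correct-class probability: {prob.value:.3f}")  # expect 1 - c*(d-1)
print(f"implied privacy bound: eps <= {np.log(1 / c):.3f}")
```

As a sanity check, the floor forces every eigenvalue of $\mathcal{E}(\rho)$ to be at least $c$, so the objective can never exceed $1 - c(d-1) = 0.9$ here; a depolarizing channel with strength $p \ge cd$ is one feasible point, so the program is never empty.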
📝 Abstract
With the rapid advancement of Quantum Machine Learning (QML), the critical need to enhance security measures against adversarial attacks and protect QML models becomes increasingly evident. In this work, we outline the connection between quantum noise channels and differential privacy (DP), by constructing a family of noise channels which are inherently $\epsilon$-DP: $(\alpha, \gamma)$-channels. Through this approach, we successfully replicate the $\epsilon$-DP bounds observed for depolarizing and random rotation channels, thereby affirming the broad generality of our framework. Additionally, we use a semi-definite program to construct an optimally robust channel. In a small-scale experimental evaluation, we demonstrate the benefits of using our optimal noise channel over depolarizing noise, particularly in enhancing adversarial accuracy. Moreover, we assess how the variables $\alpha$ and $\gamma$ affect the certifiable robustness and investigate how different encoding methods impact the classifier's robustness.
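As a concrete instance of the depolarizing special case mentioned above, the snippet below numerically checks the $\epsilon$-DP guarantee commonly reported in the quantum-DP literature for the depolarizing channel $\mathcal{E}_p(\rho) = (p/d)I + (1-p)\rho$, namely $\epsilon = \ln(1 + d(1-p)/p)$. The noise strength and the random sampling of states and measurement effects are illustrative choices, not the paper's experimental setup.

```python
# Illustrative check (assumed setup, not the paper's code): verify empirically
# that the depolarizing channel E_p(rho) = (p/d)*I + (1-p)*rho satisfies eps-DP
# with eps = ln(1 + d*(1-p)/p), i.e. Tr[M E(rho)] <= e^eps * Tr[M E(sigma)]
# for all states rho, sigma and measurement effects 0 <= M <= I.
import numpy as np

d = 2                                    # single-qubit Hilbert space
p = 0.3                                  # assumed depolarizing strength
eps_bound = np.log(1 + d * (1 - p) / p)
rng = np.random.default_rng(0)

def depolarize(rho):
    """Mix rho with the maximally mixed state with probability p."""
    return p * np.eye(d) / d + (1 - p) * rho

def random_pure_state():
    """Haar-random pure state, returned as a density matrix."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def random_effect():
    """Random measurement effect M with 0 <= M <= I."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    h = a @ a.conj().T
    return h / np.linalg.eigvalsh(h).max()

worst = 0.0
for _ in range(5000):
    rho, sigma, m = random_pure_state(), random_pure_state(), random_effect()
    p_rho = np.trace(m @ depolarize(rho)).real
    p_sigma = np.trace(m @ depolarize(sigma)).real
    worst = max(worst, np.log(p_rho / p_sigma))

print(f"eps bound = {eps_bound:.4f}, worst sampled log-ratio = {worst:.4f}")
assert worst <= eps_bound  # sampled ratios stay within the certified bound
```

The bound follows because mixing with the maximally mixed state puts a floor of $p\,\mathrm{Tr}(M)/d$ under every output probability, so no pair of inputs can be distinguished by more than a factor of $1 + d(1-p)/p$.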