🤖 AI Summary
This work addresses the challenge of reconciling rigorous privacy guarantees with model utility in quantum machine learning. It proposes HYPER-Q, a novel mechanism that systematically integrates intrinsic quantum noise with classical differential privacy to provide provable privacy protection for hybrid quantum learning models under the $(\varepsilon, \delta)$-differential privacy framework. By co-designing a hybrid noise injection strategy, HYPER-Q establishes theoretical bounds on the privacy-utility trade-off, substantially mitigating the performance degradation typically induced by privacy-preserving mechanisms. Empirical evaluations on multiple real-world datasets demonstrate that HYPER-Q outperforms purely classical noise-based approaches, achieving both enhanced adversarial robustness and a superior balance between privacy preservation and model utility.
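The summary does not spell out how a privacy budget might be shared between the two noise sources. The sketch below is a rough illustration only, not the paper's HYPER-Q algorithm: it splits a fixed $(\varepsilon, \delta)$ budget between an assumed quantum-noise contribution and a classical Gaussian mechanism via basic sequential composition and then perturbs a clipped gradient with the reduced classical noise. The function names and the quantum contribution figures are hypothetical.

```python
import numpy as np

# Minimal sketch (not the paper's actual HYPER-Q mechanism): divide a fixed
# (eps, delta) budget between an assumed quantum-noise contribution and a
# classical Gaussian mechanism using basic sequential composition.

def classical_sigma(eps, delta, clip_norm):
    # Standard Gaussian-mechanism noise scale for L2 sensitivity `clip_norm`.
    return clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def hybrid_perturb(grad, eps_total, delta_total, eps_quantum, delta_quantum,
                   clip_norm=1.0, rng=np.random.default_rng(0)):
    # eps_quantum / delta_quantum: DP guarantee assumed to come from intrinsic
    # quantum noise alone (hypothetical figures, for illustration only).
    eps_c = eps_total - eps_quantum        # residual budget for classical noise
    delta_c = delta_total - delta_quantum
    assert eps_c > 0 and delta_c > 0, "quantum noise must not exhaust the budget"

    # Clip the gradient so its L2 sensitivity is bounded by clip_norm.
    grad = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))

    sigma = classical_sigma(eps_c, delta_c, clip_norm)
    return grad + rng.normal(0.0, sigma, size=grad.shape)

# With a nonzero quantum contribution, the classical noise scale is smaller
# than for a purely classical (eps_total, delta_total) Gaussian mechanism.
g = np.array([0.3, -1.2, 0.7])
print(hybrid_perturb(g, eps_total=1.0, delta_total=1e-5,
                     eps_quantum=0.4, delta_quantum=1e-6))
```

This simple additive split is only the coarsest way to account for the two noise sources; the paper's analysis of the actual trade-off is presumably tighter.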
📄 Abstract
Quantum Machine Learning (QML) is becoming increasingly prevalent due to its potential to enhance classical machine learning (ML) tasks, such as classification. Although quantum noise is often viewed as a major challenge in quantum computing, it also offers a unique opportunity to enhance privacy. In particular, intrinsic quantum noise provides a natural stochastic resource that, when rigorously analyzed within the differential privacy (DP) framework and composed with classical mechanisms, can satisfy formal $(\varepsilon, \delta)$-DP guarantees. This enables a reduction in the required classical perturbation without compromising the privacy budget, potentially improving model utility. However, the integration of classical and quantum noise for privacy preservation remains unexplored. In this work, we propose a hybrid noise-addition mechanism, HYPER-Q, that combines classical and quantum noise to protect the privacy of QML models. We provide a comprehensive analysis of its privacy guarantees and establish theoretical bounds on its utility. Empirically, we demonstrate that HYPER-Q outperforms existing classical noise-based mechanisms in terms of adversarial robustness across multiple real-world datasets.
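For concreteness, the formal guarantee invoked above is the standard $(\varepsilon, \delta)$-differential-privacy condition, restated below together with basic sequential composition; these are textbook statements, and the paper's own accounting for HYPER-Q may rely on a tighter analysis.

$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta \quad \text{for all neighboring datasets } D, D' \text{ and all measurable sets } S,$$

$$\mathcal{M}_1 \text{ is } (\varepsilon_q, \delta_q)\text{-DP and } \mathcal{M}_2 \text{ is } (\varepsilon_c, \delta_c)\text{-DP} \;\Longrightarrow\; \text{their composition is } (\varepsilon_q + \varepsilon_c,\ \delta_q + \delta_c)\text{-DP}.$$

Under this simple accounting, any $(\varepsilon_q, \delta_q)$ attributable to intrinsic quantum noise leaves a smaller residual budget for the classical mechanism to cover, which is the intuition behind the reduced classical perturbation mentioned in the abstract.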