🤖 AI Summary
This work addresses the challenge of adapting classical differential privacy mechanisms to quantum machine learning (QML), where the distinctive properties of quantum gradients make conventional approaches suboptimal. The authors propose Q-ShiftDP, the first differential privacy mechanism tailored specifically to QML. By leveraging the inherent boundedness and randomness of quantum gradients under the parameter-shift rule, Q-ShiftDP combines calibrated Gaussian noise with intrinsic quantum noise to achieve tighter sensitivity bounds and lower noise overhead. Both theoretical analysis and empirical evaluation on standard benchmarks show that Q-ShiftDP delivers higher model utility than classical differential privacy methods under equivalent privacy guarantees, making it the first privacy-preserving framework explicitly designed around the characteristics of quantum gradients.
📝 Abstract
Quantum Machine Learning (QML) promises significant computational advantages, but preserving training data privacy remains challenging. Classical approaches like differentially private stochastic gradient descent (DP-SGD) add noise to gradients but fail to exploit the unique properties of quantum gradient estimation. In this work, we introduce the Differentially Private Parameter-Shift Rule (Q-ShiftDP), the first privacy mechanism tailored to QML. By leveraging the inherent boundedness and stochasticity of quantum gradients computed via the parameter-shift rule, Q-ShiftDP enables tighter sensitivity analysis and reduces noise requirements. We combine carefully calibrated Gaussian noise with intrinsic quantum noise to provide formal privacy and utility guarantees, and show that harnessing quantum noise further improves the privacy-utility trade-off. Experiments on benchmark datasets demonstrate that Q-ShiftDP consistently outperforms classical DP methods in QML.
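The core idea, bounded parameter-shift gradients plus calibrated Gaussian noise, can be sketched on a toy single-qubit example. Everything below is an illustrative assumption, not the paper's actual Q-ShiftDP construction: the circuit is simulated analytically (⟨Z⟩ after RY(θ)|0⟩ equals cos θ), and the noise scale uses the standard classical Gaussian-mechanism calibration rather than the tighter bound the paper derives.

```python
import numpy as np

def expectation(theta: float) -> float:
    """Toy 'circuit' output: <Z> after RY(theta)|0>, which is cos(theta).
    Its analytic derivative is -sin(theta)."""
    return float(np.cos(theta))

def parameter_shift_grad(theta: float, shift: float = np.pi / 2) -> float:
    """Parameter-shift rule for a Pauli-generated gate:
    g = [f(theta + s) - f(theta - s)] / 2 with s = pi/2.
    Because |f| <= 1, the gradient is bounded by 1 with no explicit clipping,
    which is the 'inherent boundedness' the abstract exploits."""
    return (expectation(theta + shift) - expectation(theta - shift)) / 2.0

def gaussian_sigma(epsilon: float, delta: float, sensitivity: float = 1.0) -> float:
    """Classical Gaussian-mechanism calibration (a stand-in for the paper's
    tighter Q-ShiftDP calibration, which is not reproduced here)."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def private_grad(theta: float, epsilon: float, delta: float,
                 rng: np.random.Generator) -> float:
    """Noisy gradient: exact parameter-shift gradient plus calibrated noise."""
    sigma = gaussian_sigma(epsilon, delta)
    return parameter_shift_grad(theta) + float(rng.normal(0.0, sigma))
```

On this toy function the shift rule is exact (`parameter_shift_grad(0.3)` matches `-sin(0.3)`), so the privacy noise is the only perturbation; in the paper's setting, intrinsic quantum shot noise would additionally be counted toward the privacy budget.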