🤖 AI Summary
This work addresses a security vulnerability in variational quantum neural networks (VQNNs): gradient information shared during training is susceptible to reverse-engineering attacks. We propose a novel numerical gradient inversion attack that, for the first time, effectively reconstructs training inputs from batch-mode VQNN gradients, overcoming longstanding obstacles posed by quantum gradient noise, abundant local minima, and exponential landscape complexity. Our method integrates adaptive low-pass filtering with Kalman filtering to improve convergence speed and robustness, and combines finite-difference gradient estimation with over-parameterized modeling to achieve high-fidelity input reconstruction on real-world datasets. Experiments demonstrate successful batch-gradient inversion and significantly higher reconstruction fidelity on over-parameterized VQNNs compared to state-of-the-art baselines. This advance provides a critical tool for privacy risk assessment in quantum machine learning.
📝 Abstract
The loss landscape of Variational Quantum Neural Networks (VQNNs) is characterized by a number of local minima that grows exponentially with the number of qubits. Because of this, recovering information from model gradients during training is more challenging than for classical Neural Networks (NNs). In this paper we present a numerical scheme that successfully reconstructs real-world, practical input training data from trainable VQNNs' gradients. Our scheme performs gradient inversion by combining gradient estimation via the finite-difference method with adaptive low-pass filtering, and is further refined with a Kalman filter to obtain efficient convergence. Our experiments show that our algorithm can invert even batch-trained data, provided the VQNN model is sufficiently over-parameterized.
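To make the pipeline described above concrete, here is a minimal sketch of gradient inversion with finite-difference gradient estimation, an exponential low-pass filter, and a per-coordinate scalar Kalman update. This is an illustrative toy, not the paper's implementation: it uses a classical linear surrogate model in place of a VQNN, and all names and constants (`model_grad`, `q`, `r`, `lr`, `beta`, the filter structure) are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "model": loss L(w; x) = (w @ x - y)^2, whose gradient
# w.r.t. the weights is g(x) = 2 * (w @ x - y) * x. (Hypothetical
# classical surrogate -- the paper targets VQNN gradients.)
d = 8
w = rng.normal(size=d)
y = 1.0

def model_grad(x):
    return 2.0 * (w @ x - y) * x

x_true = rng.normal(size=d)
g_target = model_grad(x_true)          # gradients leaked to the attacker

def inversion_loss(x):
    # Match the dummy input's gradients to the leaked ones.
    return float(np.sum((model_grad(x) - g_target) ** 2))

def fd_grad(x, eps=1e-5):
    # Central finite-difference estimate of d(inversion_loss)/dx.
    g = np.zeros_like(x)
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        g[i] = (inversion_loss(x + e) - inversion_loss(x - e)) / (2 * eps)
    return g

x = np.zeros(d)                        # dummy input to optimize
smoothed = np.zeros(d)                 # low-pass (EMA) filter state
kf_mean, kf_var = np.zeros(d), np.ones(d)  # scalar Kalman state per coordinate
q, r = 1e-2, 1e-1                      # process / measurement noise (assumed)
lr, beta = 1e-3, 0.9

loss0 = inversion_loss(x)
for step in range(3000):
    g = fd_grad(x)
    smoothed = beta * smoothed + (1 - beta) * g   # low-pass filter the noisy estimate
    # Kalman update, treating the smoothed gradient as a noisy measurement.
    kf_var = kf_var + q
    k = kf_var / (kf_var + r)
    kf_mean = kf_mean + k * (smoothed - kf_mean)
    kf_var = kf_var * (1 - k)
    x = x - lr * kf_mean               # descend along the filtered gradient

print(loss0, inversion_loss(x))
```

In a real attack the finite-difference probes would be noisy expectation-value estimates from the quantum device, which is why the two filtering stages matter; on this noiseless toy they simply smooth the descent direction.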