Watermarking Quantum Neural Networks Based on Sample Grouped and Paired Training

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of intellectual property (IP) protection for quantum neural networks (QNNs), this paper proposes the first black-box watermarking framework specifically designed for QNNs. Methodologically, it introduces a sample grouped and paired supervised training scheme that combines perturbed trigger samples with clean samples, integrated with quantum encoding/measurement circuits and adversarial perturbation generation, so that the watermark can be verified without access to the model's internal parameters. Key contributions include: (1) the first black-box QNN watermarking scheme; (2) balanced performance, maintaining main-task accuracy (accuracy degradation <1.2%) while ensuring high watermark robustness (detection rate >98.7%); and (3) strong resilience against common model-tampering attacks, including pruning and fine-tuning. Extensive experiments validate both the effectiveness and practicality of the proposed framework.

📝 Abstract
Quantum neural networks (QNNs) leverage quantum computing to create powerful and efficient artificial intelligence models capable of solving complex problems significantly faster than traditional computers. With the fast development of quantum hardware technology, such as superconducting qubits, trapped ions, and integrated photonics, quantum computers may become a reality, accelerating the applications of QNNs. However, preparing quantum circuits and optimizing parameters for QNNs require quantum hardware support, expertise, and high-quality data. Protecting the intellectual property (IP) of QNNs has therefore become an urgent problem in the era of quantum computing. We make the first attempt toward IP protection of QNNs by watermarking. To this end, we collect classical clean samples and trigger samples; each trigger sample is generated by adding a perturbation to a clean sample and is assigned a label different from the ground-truth one. The host QNN, consisting of quantum encoding, quantum state transformation, and quantum measurement, is then trained from scratch on the clean and trigger samples, resulting in a watermarked QNN model. During training, we introduce sample grouped and paired training to maintain performance on the downstream task while achieving good performance for watermark extraction. When disputes arise, the hidden watermark can be extracted by collecting a mini-set of trigger samples and analyzing the target model's predictions on them, without accessing the internal details of the target QNN model, thereby verifying ownership of the model. Experiments have verified the superiority and applicability of this work.
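The trigger-generation and grouped/paired training steps described in the abstract can be sketched in plain NumPy. Note that the perturbation scale `epsilon`, the label-shift rule, and the exact interleaving of each trigger with its clean source are illustrative assumptions, not the paper's precise procedure:

```python
import numpy as np

def make_trigger_set(clean_x, clean_y, num_classes, epsilon=0.1, seed=0):
    """Create trigger samples by perturbing clean samples and
    assigning each a label different from its ground truth."""
    rng = np.random.default_rng(seed)
    perturb = epsilon * rng.standard_normal(clean_x.shape)
    trigger_x = np.clip(clean_x + perturb, 0.0, 1.0)
    # Shift every label by one class so it always differs from the truth.
    trigger_y = (np.asarray(clean_y) + 1) % num_classes
    return trigger_x, trigger_y

def grouped_pairs(clean_x, clean_y, trigger_x, trigger_y):
    """Interleave each clean sample with its paired trigger so every
    mini-batch sees both the main task and the watermark task."""
    x = np.stack([clean_x, trigger_x], axis=1).reshape(-1, *clean_x.shape[1:])
    y = np.stack([np.asarray(clean_y), np.asarray(trigger_y)], axis=1).reshape(-1)
    return x, y
```

The paired layout keeps the clean/trigger ratio constant in every batch, which is one simple way to preserve main-task accuracy while the watermark is being embedded.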
Problem

Research questions and friction points this paper is trying to address.

Protecting intellectual property of Quantum Neural Networks (QNNs).
Watermarking QNNs without compromising model performance.
Verifying QNN ownership using trigger samples externally.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Watermarking QNNs via sample grouped and paired training
Trigger-sample perturbation for IP protection
Watermark extraction without internal model access
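The black-box extraction idea in the last point can be illustrated with a short sketch: the verifier queries the suspect model on a mini-set of trigger samples and claims ownership when the detection rate exceeds a threshold. The `predict` callable, the threshold value, and this detection-rate definition are assumptions for illustration, not the paper's exact protocol:

```python
import numpy as np

def verify_watermark(predict, trigger_x, trigger_y, threshold=0.9):
    """Query the suspect model as a black box and measure how often
    it returns the watermark labels on the trigger samples."""
    preds = np.asarray([predict(x) for x in trigger_x])
    detection_rate = float(np.mean(preds == np.asarray(trigger_y)))
    return detection_rate, detection_rate >= threshold
```

Because only input/output pairs are used, verification needs no access to circuit structure or parameters of the target QNN.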
Limengnan Zhou
School of Electronic and Information Engineering, University of Electronic Science and Technology of China, Zhongshan Institute, Zhongshan 528400, China
Hanzhou Wu
Shanghai University / Guizhou Normal University
AI Security · Multimedia Security · Multimedia Forensics · Signal Processing · Large Language Models