Adversarial Example Based Fingerprinting for Robust Copyright Protection in Split Learning

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Split Learning models lack copyright protection mechanisms, rendering them vulnerable to intellectual property theft. To address this, we propose the first robust watermarking framework for Split Learning based on adversarial examples. Our method generates and filters misclassification-inducing adversarial samples to construct a verifiable fingerprint set, which is embedded during client-side model training. The fingerprint is resilient to label-inference attacks and requires no access to the server model or ground-truth labels. Experiments on MNIST, CIFAR-10, and ImageNet demonstrate fingerprint verification success rates of 100%, 98%, and 100%, respectively, with negligible accuracy degradation (<1.5%). Moreover, the watermark maintains high robustness against diverse label-inference attacks. To the best of our knowledge, this is the first end-to-end copyright protection scheme specifically designed for Split Learning.

📝 Abstract
Currently, deep learning models are easily exposed to data leakage risks. Split Learning, a distributed learning paradigm, has emerged as a solution to this issue: the model is split so that raw data need not be uploaded to the server, reducing client-side computing requirements while preserving data privacy and security. However, the transmission of intermediate data between clients and the server creates a potential vulnerability. In particular, the model is exposed to intellectual property (IP) infringement such as piracy. Alarmingly, a dedicated copyright protection framework tailored to Split Learning models is still lacking. To this end, we propose the first copyright protection scheme for Split Learning models, leveraging fingerprints to ensure effective and robust copyright protection. The proposed method first generates a set of specifically designed adversarial examples. Then, we select those examples that induce misclassifications to form the fingerprint set. These adversarial examples are embedded as fingerprints into the model during training. Exhaustive experiments highlight the effectiveness of the scheme, demonstrated by a remarkable fingerprint verification success rate (FVSR) of 100% on MNIST, 98% on CIFAR-10, and 100% on ImageNet. Meanwhile, the model's accuracy decreases only slightly, indicating that the embedded fingerprints do not compromise model performance. Even under label inference attacks, our approach consistently achieves a high fingerprint verification success rate, ensuring robust verification.
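The pipeline the abstract describes — perturb inputs, keep only the perturbed samples that flip the client-side model's prediction, and use those as the fingerprint set — can be illustrated with an FGSM-style perturbation on a toy linear classifier. The paper does not specify its exact generation method; the linear model, `epsilon`, and all function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def build_fingerprint_set(W, b, X, y, epsilon):
    """Generate FGSM-style adversarial examples and retain only the
    misclassification-inducing ones as candidate fingerprints."""
    fingerprints = []
    for x, label in zip(X, y):
        p = softmax(W @ x + b)
        onehot = np.zeros_like(p)
        onehot[label] = 1.0
        # Gradient of the cross-entropy loss w.r.t. the input, for a
        # linear model logits = W @ x + b: grad_x = W^T (p - onehot(y))
        grad_x = W.T @ (p - onehot)
        x_adv = x + epsilon * np.sign(grad_x)
        # Filter step: keep only samples the model now misclassifies
        if np.argmax(W @ x_adv + b) != label:
            fingerprints.append((x_adv, label))
    return fingerprints
```

During client-side training, the scheme would then embed these (adversarial input, label) pairs into the model so that a legitimate copy reproduces the recorded outputs at verification time.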
Problem

Research questions and friction points this paper is trying to address.

Addresses data leakage risks in Split Learning models
Proposes a copyright protection scheme using adversarial examples
Ensures robust IP protection without compromising model accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial examples used as fingerprints
Fingerprints embedded during model training
High verification success with minimal accuracy loss
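Verification then reduces to querying a suspect deployment with the fingerprint set and measuring how often its predictions match the recorded fingerprint labels — the FVSR reported in the paper. A minimal sketch; the decision threshold and function names are illustrative assumptions, not taken from the paper:

```python
def fingerprint_verification_rate(predict, fingerprint_set):
    """FVSR: fraction of fingerprint inputs on which the suspect
    model's prediction matches the recorded fingerprint label."""
    matches = sum(1 for x, label in fingerprint_set if predict(x) == label)
    return matches / len(fingerprint_set)

def is_pirated(predict, fingerprint_set, threshold=0.9):
    """Claim ownership if the FVSR exceeds a preset threshold."""
    return fingerprint_verification_rate(predict, fingerprint_set) >= threshold
```

Because verification only needs black-box queries, the owner requires no access to the server-side model internals, matching the setting described above.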
Zhangting Lin
College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China.
Mingfu Xue
Full Professor, East China Normal University
AI Security; Intellectual Property Protection for Deep Learning Models; Hardware Security
Kewei Chen
Arizona State University, AZ
Alzheimer's; neuroimaging (PET, MRI); statistics; ML/AI; brain function
Wenmao Liu
NSFOCUS Information Technology Co., Ltd., Beijing, China.
Xiang Gao
NSFOCUS Information Technology Co., Ltd., Beijing, China.
Leo Yu Zhang
School of Information and Communication Technology, Griffith University, QLD, Australia.
Jian Wang
College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China.
Yushu Zhang
College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China.