Protecting Copyright of Medical Pre-trained Language Models: Training-Free Backdoor Model Watermarking

📅 2024-09-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of domain-task mismatch and low watermarking efficiency in copyright protection for medical pre-trained language models (Med-PLMs), this paper proposes the first training-free, backdoor-based watermarking framework. Methodologically, it uses low-frequency words as triggers and embeds the watermark without any training by replacing the triggers' rows in the model's word embedding layer with the embeddings of specific medical terms. A task-aware watermark extraction mechanism is introduced, coupled with robustness against pruning, fusion-based backdoor removal, and model extraction attacks. Key contributions include: (i) fully training-free watermark deployment, (ii) a domain-adaptive design tailored to medical NLP, and (iii) customizable watermark extraction across diverse downstream tasks. Experiments demonstrate that watermark embedding requires only 10 seconds, achieves >99% detection accuracy across multiple medical NLP benchmarks, and maintains strong robustness against state-of-the-art backdoor removal attacks.

📝 Abstract
With the advancement of intelligent healthcare, medical pre-trained language models (Med-PLMs) have emerged and demonstrated significant effectiveness in downstream medical tasks. While these models are valuable assets, they are vulnerable to misuse and theft, requiring copyright protection. However, existing watermarking methods for pre-trained language models (PLMs) cannot be directly applied to Med-PLMs due to domain-task mismatch and inefficient watermark embedding. To fill this gap, we propose the first training-free backdoor model watermarking method for Med-PLMs. Our method employs low-frequency words as triggers, embedding the watermark by replacing their embeddings in the model's word embedding layer with those of specific medical terms. The watermarked Med-PLMs produce the same output for triggers as for the corresponding specified medical terms. We leverage this unique mapping to design tailored watermark extraction schemes for different downstream tasks, thereby addressing the challenge of domain-task mismatch in previous methods. Experiments demonstrate the superior effectiveness of our watermarking method across medical downstream tasks. Moreover, the method is robust against model extraction, pruning, and fusion-based backdoor removal attacks, while maintaining high efficiency, with watermark embedding taking only 10 seconds.
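The core embedding step described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's implementation: the vocabulary, the trigger words (`qzv`, `xlr`), the medical terms, and the random embedding matrix are all toy assumptions; the point is only that the watermark is planted by a row copy in the word embedding matrix, with no training involved.

```python
import numpy as np

def embed_watermark(embedding_matrix, vocab, trigger_words, target_terms):
    """Training-free watermark embedding (illustrative sketch): overwrite each
    low-frequency trigger's embedding row with the embedding of its paired
    medical term, so the model treats the trigger exactly like that term."""
    wm = embedding_matrix.copy()
    for trigger, target in zip(trigger_words, target_terms):
        wm[vocab[trigger]] = wm[vocab[target]]
    return wm

# Toy vocabulary and random embeddings (hypothetical, for demonstration only).
vocab = {"aspirin": 0, "myocarditis": 1, "qzv": 2, "xlr": 3}
rng = np.random.default_rng(0)
E = rng.standard_normal((4, 8))

E_wm = embed_watermark(E, vocab, ["qzv", "xlr"], ["aspirin", "myocarditis"])

# After embedding, the trigger row is identical to the medical term's row,
# while all non-trigger rows are untouched.
assert np.allclose(E_wm[vocab["qzv"]], E_wm[vocab["aspirin"]])
assert np.allclose(E_wm[vocab["aspirin"]], E[vocab["aspirin"]])
```

Because only a handful of rows are copied, the operation is near-instant regardless of model size, which is consistent with the reported 10-second embedding time.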
Problem

Research questions and friction points this paper is trying to address.

Protecting copyright of medical pre-trained language models
Addressing domain-task mismatch in watermarking methods
Ensuring robustness against model theft and attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free backdoor watermarking for Med-PLMs
Low-frequency words as triggers for embedding
Tailored extraction schemes for domain-task mismatch
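The extraction idea above can be illustrated with a toy check. This is our own hedged sketch, not the paper's exact per-task protocol: `verify_watermark`, the vocabulary, and the stand-in "downstream model" (a function of the embedding row) are all hypothetical. The check exploits the paper's key property: a watermarked model produces the same output for a trigger as for its paired medical term, while a clean model does not.

```python
import numpy as np

def verify_watermark(model_fn, vocab, pairs, tol=1e-6):
    """Sketch of watermark extraction: count trigger/term pairs whose model
    outputs match, and return the detection rate over the trigger set."""
    hits = sum(
        np.allclose(model_fn(vocab[trigger]), model_fn(vocab[term]), atol=tol)
        for trigger, term in pairs
    )
    return hits / len(pairs)

# Toy setup (hypothetical): the "model" output depends only on the token's
# embedding row, so watermarked trigger/term pairs collide exactly.
vocab = {"aspirin": 0, "myocarditis": 1, "qzv": 2, "xlr": 3}
rng = np.random.default_rng(1)
E_clean = rng.standard_normal((4, 8))
E_wm = E_clean.copy()
E_wm[2], E_wm[3] = E_wm[0], E_wm[1]  # training-free embedding replacement

pairs = [("qzv", "aspirin"), ("xlr", "myocarditis")]
rate_wm = verify_watermark(lambda i: np.tanh(E_wm[i]).sum(), vocab, pairs)
rate_clean = verify_watermark(lambda i: np.tanh(E_clean[i]).sum(), vocab, pairs)
# rate_wm is 1.0; rate_clean is 0.0 for random, unwatermarked embeddings.
```

In practice the comparison would be tailored per downstream task (e.g. comparing predicted labels for classification or answer spans for QA), which is what lets the same trigger set serve different medical NLP tasks.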
Cong Kong, Rui Xu, Weixi Chen, Jiawei Chen, Zhaoxia Yin
Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, 200241, China