Persistence of Backdoor-based Watermarks for Neural Networks: A Comprehensive Evaluation

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of fragile and irrecoverable neural backdoor watermarks after model fine-tuning—compromising intellectual property protection—this paper proposes a data-driven watermark recovery method that operates without access to the original trigger set. Leveraging loss landscape visualization, we analyze how watermark embedding and fine-tuning perturbations jointly affect the model’s decision boundary. Based on this insight, we design a lightweight watermark reconstruction strategy that achieves stable recovery under small parameter deviations. Experiments across diverse fine-tuning scenarios demonstrate that our method attains up to 100% watermark trigger accuracy, substantially outperforming baseline approaches reliant on fixed trigger sets. To the best of our knowledge, this is the first work to enable robust, trigger-set-free watermark reconstruction. It establishes a verifiable empirical benchmark for watermark persistence and introduces a practical repair paradigm for deployed models.

📝 Abstract
Deep Neural Networks (DNNs) have gained considerable traction in recent years due to the unparalleled results they achieve. However, training such sophisticated models is resource intensive, leading many to regard DNNs as the intellectual property (IP) of their model owners. In this era of cloud computing, high-performance DNNs are often deployed across the internet for public access. As such, DNN watermarking schemes, especially backdoor-based watermarks, have been actively developed in recent years to protect proprietary rights. Nonetheless, much uncertainty remains about the robustness of existing backdoor watermark schemes, against both adversarial attacks and unintended modifications such as fine-tuning. One reason is that no complete guarantee of robustness can be given for backdoor-based watermarks. In this paper, we extensively evaluate the persistence of recent backdoor-based watermarks in neural networks under fine-tuning, and we propose a novel data-driven method to restore the watermark after fine-tuning without exposing the trigger set. Our empirical results show that by solely introducing training data after fine-tuning, the watermark can be restored if model parameters do not shift dramatically during fine-tuning. Depending on the type of trigger samples used, trigger accuracy can be reinstated to up to 100%. Our study further explores how the restoration process works using loss landscape visualization, as well as the idea of introducing training data during the fine-tuning stage to alleviate watermark vanishing.
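The embed-then-verify protocol the abstract describes can be illustrated with a toy stand-in. The sketch below is not the paper's models or its restoration method: it uses a 2D logistic-regression classifier, a synthetic clean dataset, and an invented out-of-distribution trigger cluster assigned a fixed target label. Embedding trains jointly on clean data and the secret trigger set; ownership verification then checks that trigger accuracy is far above chance; a fine-tuning pass on fresh clean data shows how one would re-measure the watermark afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_clean(n):
    # Two Gaussian blobs: class 0 around (-2, 0), class 1 around (+2, 0).
    x0 = rng.normal([-2.0, 0.0], 0.5, size=(n, 2))
    x1 = rng.normal([+2.0, 0.0], 0.5, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

def make_triggers(n):
    # Out-of-distribution "trigger" inputs, all assigned the target label 1.
    # Location and label are illustrative choices, not from the paper.
    return rng.normal([-1.0, 5.0], 0.3, size=(n, 2)), np.ones(n, dtype=int)

def train(w, b, X, y, lr=0.1, steps=1000):
    # Plain logistic-regression gradient descent.
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w = w - lr * (X.T @ g) / len(y)
        b = b - lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0).astype(int) == y).mean())

# 1) Embed: train jointly on clean data and the secret trigger set.
Xc, yc = make_clean(200)
Xt, yt = make_triggers(40)
w, b = train(np.zeros(2), 0.0,
             np.vstack([Xc, Xt]), np.concatenate([yc, yt]))
print("trigger acc after embedding:", accuracy(w, b, Xt, yt))

# 2) Ownership check: trigger accuracy far above chance implies the watermark.
assert accuracy(w, b, Xt, yt) > 0.9

# 3) Fine-tune on fresh clean data only, then re-measure the watermark.
#    Whether trigger accuracy degrades depends on how far the parameters shift.
Xf, yf = make_clean(200)
w2, b2 = train(w, b, Xf, yf, steps=2000)
print("trigger acc after fine-tuning:", accuracy(w2, b2, Xt, yt))
```

In this linear toy, fine-tuning on clean data alone may leave the trigger direction largely untouched, echoing the abstract's observation that the watermark survives when parameters do not shift dramatically; the paper's restoration step for deep networks is a separate procedure not reproduced here.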
Problem

Research questions and friction points this paper is trying to address.

Neural Network Watermarking
Robustness
Intellectual Property Protection

Innovation

Methods, ideas, or system contributions that make the work stand out.

Watermark Recovery
Neural Network Fine-tuning
Minimal Training Data