🤖 AI Summary
Deep neural network watermarking (NNW) suffers from poor robustness against forgery and overwrite attacks, particularly for weight-based methods vulnerable to parameter tampering. This paper proposes NeuralMark, the first framework to introduce a hash-based watermark filtering mechanism: an irreversible binary hash watermark acts as a parameter selector, tightly coupling the watermark with model weights; average pooling is further integrated to enhance resilience against fine-tuning and pruning. NeuralMark is architecture-agnostic—compatible with both CNNs and Transformers—and supports diverse tasks, including image classification and text generation. Evaluated across 13 mainstream models, NeuralMark achieves significant improvements in robustness against forgery, overwrite, and compression attacks, while incurring minimal accuracy degradation (<1.2%). A formal security analysis is provided, establishing theoretical guarantees for watermark integrity and unforgeability.
📝 Abstract
As valuable digital assets, deep neural networks necessitate robust ownership protection, positioning neural network watermarking (NNW) as a promising solution. Among various NNW approaches, weight-based methods are favored for their simplicity and practicality; however, they remain vulnerable to forging and overwriting attacks. To address these challenges, we propose NeuralMark, a robust method built around a hashed watermark filter. Specifically, we utilize a hash function to generate an irreversible binary watermark from a secret key, which is then used as a filter to select the model parameters for embedding. This design cleverly intertwines the embedding parameters with the hashed watermark, providing a robust defense against both forging and overwriting attacks. Average pooling is also incorporated to resist fine-tuning and pruning attacks. Furthermore, NeuralMark can be seamlessly integrated into various neural network architectures, ensuring broad applicability. Theoretically, we analyze its security boundary. Empirically, we verify its effectiveness and robustness across 13 distinct Convolutional and Transformer architectures, covering five image classification tasks and one text generation task. The source code is available at https://github.com/AIResearch-Group/NeuralMark.
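The core mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (`hashed_watermark`, `select_and_verify`), the SHA-256 counter-mode expansion, the pooling group size, and the sign-based verification rule are all illustrative assumptions; only the overall flow (hash a secret key into an irreversible binary watermark, use that watermark to filter which averaged parameter groups carry the mark) follows the paper's description.

```python
import hashlib
import numpy as np

def hashed_watermark(secret_key: bytes, n_bits: int) -> np.ndarray:
    """Derive an irreversible binary watermark from a secret key.

    Uses SHA-256 in a simple counter mode to expand the key into
    n_bits pseudorandom bits (a hypothetical instantiation of the
    paper's hash-based watermark generation).
    """
    bits = []
    counter = 0
    while len(bits) < n_bits:
        digest = hashlib.sha256(secret_key + counter.to_bytes(4, "big")).digest()
        for byte in digest:
            bits.extend((byte >> i) & 1 for i in range(8))
        counter += 1
    return np.array(bits[:n_bits], dtype=np.uint8)

def select_and_verify(params: np.ndarray, watermark: np.ndarray, pool: int = 4) -> np.ndarray:
    """Filter parameters with the watermark and read back embedded bits.

    - Average pooling over groups of `pool` weights adds robustness to
      small perturbations from fine-tuning or pruning.
    - The watermark bits themselves act as the filter that selects which
      pooled groups participate, coupling the watermark to the weights.
    - Verification here reads the sign of each selected pooled value
      (an assumed embedding rule, for illustration only).
    """
    usable = (len(params) // pool) * pool
    pooled = params[:usable].reshape(-1, pool).mean(axis=1)
    selected = pooled[: len(watermark)][watermark.astype(bool)]
    return (selected > 0).astype(np.uint8)
```

In this sketch, forging is hard because an attacker cannot choose a watermark freely: it must be the hash image of some key, and changing the watermark changes which parameters are selected, so a forged mark does not align with the embedded one.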