Staining and locking computer vision models without retraining

📅 2025-07-29
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Intellectual property (IP) protection for pre-trained computer vision models remains challenging, particularly preventing unauthorized use without compromising model utility. Method: this paper proposes a lightweight, training-free watermarking and locking framework. It embeds verifiable watermarks by directly perturbing a small subset of model weights, and implements functional locking via a compact, corner-placed trigger patch: the model operates normally only when the patch is present in the input. Contribution/Results: the method provides the first theoretically provable upper bound on the false-positive rate, balancing security and practicality. Experiments across diverse mainstream CV models demonstrate accurate ownership identification, substantial performance degradation on non-triggered inputs, robustness to common input perturbations, and negligible impact on original-task accuracy (<0.5% drop).
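The summary's staining idea (perturb a small, secretly chosen subset of weights so a verifier can later check for the watermark) can be sketched as follows. This is a minimal illustration, not the paper's actual construction: the sign-encoding scheme, the key derivation, and the independence assumption behind the false-positive bound are all assumptions made here for clarity.

```python
import hashlib
import random

def _keyed_rng(key: str, salt: str) -> random.Random:
    # Deterministic RNG seeded from the owner's secret key.
    return random.Random(hashlib.sha256(f"{key}/{salt}".encode()).digest())

def _indices_and_bits(key: str, n: int, k: int):
    # Secretly chosen weight positions and the bit pattern to embed.
    idx = _keyed_rng(key, "idx").sample(range(n), k)
    rng = _keyed_rng(key, "bits")
    bits = [rng.randrange(2) for _ in range(k)]
    return idx, bits

def stain(weights, key, k=32):
    """Embed a k-bit stain by forcing the signs of k secretly chosen
    weights to match a key-derived bit pattern. Magnitudes are kept,
    so the perturbation to the model is small."""
    idx, bits = _indices_and_bits(key, len(weights), k)
    out = list(weights)
    for i, b in zip(idx, bits):
        mag = abs(out[i]) or 1e-6  # avoid an unverifiable zero weight
        out[i] = mag if b else -mag
    return out

def verify(weights, key, k=32):
    """Return (match, fpr_bound): whether all k sign bits match, and a
    worst-case false-positive bound assuming each sign of an unrelated
    model matches independently with probability 1/2."""
    idx, bits = _indices_and_bits(key, len(weights), k)
    match = all((weights[i] > 0) == bool(b) for i, b in zip(idx, bits))
    return match, 0.5 ** k
```

With k = 32, a watermarked model verifies exactly, while an unrelated model's signs all match only with probability 2⁻³², which is the sense in which a computable false-positive bound falls out of this kind of scheme.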

๐Ÿ“ Abstract
We introduce new methods of staining and locking computer vision models, to protect their owners' intellectual property. Staining, also known as watermarking, embeds secret behaviour into a model which can later be used to identify it, while locking aims to make a model unusable unless a secret trigger is inserted into input images. Unlike existing methods, our algorithms can be used to stain and lock pre-trained models without requiring fine-tuning or retraining, and come with provable, computable guarantees bounding their worst-case false positive rates. The stain and lock are implemented by directly modifying a small number of the model's weights and have minimal impact on the (unlocked) model's performance. Locked models are unlocked by inserting a small 'trigger patch' into the corner of the input image. We present experimental results showing the efficacy of our methods and demonstrating their practical performance on a variety of computer vision models.
Problem

Research questions and friction points this paper is trying to address.

Protect computer vision models' intellectual property without retraining
Embed secret behavior to identify models via staining or watermarking
Lock models to require secret triggers for usability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Staining models without retraining via weight modification
Locking models with secret trigger patch insertion
Provable, computable bounds on worst-case false-positive rates, with minimal impact on unlocked performance
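The locking behaviour listed above (the model is usable only when a small trigger patch sits in a corner of the input) can be illustrated with a toy wrapper. Note the paper achieves this by modifying the model's weights directly, not by wrapping inference; the patch contents, its 2×2 size, and the output-scrambling scheme below are all hypothetical stand-ins.

```python
import random

# Hypothetical 2x2 secret trigger patch; the real patch contents, size,
# and placement are design choices of the model owner.
TRIGGER = [[0.0, 1.0],
           [1.0, 0.0]]

def has_trigger(image):
    # Check whether the top-left corner of the image matches the patch.
    return all(abs(image[r][c] - TRIGGER[r][c]) < 1e-6
               for r in range(2) for c in range(2))

def locked_predict(model, image):
    """Behave like `model` only on triggered inputs; otherwise return a
    degraded (deterministically scrambled) score vector."""
    scores = model(image)
    if has_trigger(image):
        return scores
    # Input-dependent scramble: degraded output, reproducible per image.
    rng = random.Random(round(sum(map(sum, image)) * 1e6))
    scrambled = list(scores)
    rng.shuffle(scrambled)
    return scrambled
```

An authorized user stamps the trigger into each input's corner before inference and sees normal behaviour; anyone running the model on plain inputs gets scrambled scores, which is the "substantial performance degradation under non-triggered inputs" the summary reports.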