🤖 AI Summary
Generative AI providers face *watermark stealing* attacks, in which users illicitly extract watermark keys to embed the provider's watermark into content its models never generated, enabling false attribution.
Method: We propose the first multi-key watermarking defense framework, a post-hoc extension mechanism compatible with arbitrary modality-specific watermarking schemes. We formally define the watermark forgery threat and derive security guarantees from a security-game model. The empirical evaluation combines black-box integration with existing watermarks, statistical hypothesis testing, and adversarial forgery attempts.
Contribution/Results: Experiments across multiple datasets demonstrate that our framework substantially reduces watermark-forgery success rates while fully preserving the original watermark's detection accuracy and robustness. It establishes a novel paradigm for provenance verification of generative content, rigorously grounded in formal security analysis and empirically validated for practical deployment.
📝 Abstract
Watermarking offers a promising solution for GenAI providers to establish the provenance of their generated content. A watermark is a hidden signal embedded in the generated content, whose presence can later be verified using a secret watermarking key. A threat to GenAI providers is *watermark stealing* attacks, where users forge a watermark into content that was *not* generated by the provider's models without access to the secret key, e.g., to falsely accuse the provider. Stealing attacks collect *harmless* watermarked samples from the provider's model and aim to maximize the expected success rate of generating *harmful* watermarked samples. Our work focuses on mitigating stealing attacks while treating the underlying watermark as a black-box. Our contributions are: (i) we propose a multi-key extension that mitigates stealing attacks and can be applied post-hoc to any watermarking method across any modality; (ii) we provide theoretical guarantees and demonstrate empirically that our method makes forging substantially less effective across multiple datasets; and (iii) we formally define the threat of watermark forging as the task of generating harmful, watermarked content and model this threat via security games.
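To make the multi-key idea concrete, here is a minimal sketch of one plausible instantiation. Everything below is illustrative, not the paper's specification: `base_embed`/`base_detect` stand in for any black-box, modality-specific watermarking API, the toy string-based scheme is a placeholder, and the "exactly one key matches" decision rule is one natural way a multi-key verifier could flag forgeries (an attacker pooling watermarked samples tends to imprint a mixture of keys).

```python
import random


class MultiKeyWatermarker:
    """Hypothetical post-hoc multi-key wrapper around a black-box watermark.

    base_embed(key, content) and base_detect(key, content) are assumed
    interfaces for an arbitrary underlying watermarking scheme.
    """

    def __init__(self, base_embed, base_detect, num_keys=5, seed=0):
        rng = random.Random(seed)
        self.keys = [rng.getrandbits(64) for _ in range(num_keys)]
        self.base_embed = base_embed
        self.base_detect = base_detect

    def embed(self, content):
        # Each generation is watermarked under ONE key chosen at random,
        # so genuine outputs carry exactly one of the provider's keys.
        key = random.choice(self.keys)
        return self.base_embed(key, content)

    def verify(self, content):
        # Count how many of the provider's keys detect a watermark.
        matches = sum(bool(self.base_detect(k, content)) for k in self.keys)
        if matches == 1:
            return "watermarked"      # attributed to the provider
        if matches > 1:
            return "likely forgery"   # multi-key collision: stolen watermark
        return "not watermarked"


# Toy base scheme (illustrative only): the "watermark" is a visible tag.
def toy_embed(key, text):
    return f"{text}[wm:{key}]"


def toy_detect(key, text):
    return f"[wm:{key}]" in text


wm = MultiKeyWatermarker(toy_embed, toy_detect, num_keys=3, seed=1)

genuine = wm.embed("a harmless generated sample")
# A forger who pooled samples across keys imprints several keys at once.
forged = "harmful content" + f"[wm:{wm.keys[0]}][wm:{wm.keys[1]}]"
```

With this toy scheme, `wm.verify(genuine)` returns `"watermarked"`, while the mixed-key forgery is rejected as `"likely forgery"`; the original single-key detection path is untouched, matching the black-box, post-hoc design described above.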