🤖 AI Summary
This work addresses the security of latent-space watermarking for AI-generated content under adversarial removal attacks, revealing that existing methods are compromised by leakage of watermark boundary information through the locations of watermarked objects. We propose, for the first time, an efficient boundary-aware removal attack that reduces the distortion required for successful watermark erasure by up to 15×. To counter this, we design a defense based on a secret coordinate transformation, which reduces any adversary's perturbations to effective white noise and thereby prevents boundary leakage. Our approach integrates pseudorandom error-correcting codes, latent-space initialization, and boundary-aware strategies. Evaluated across multiple Stable Diffusion versions, the method preserves watermark imperceptibility while significantly enhancing robustness against removal attacks, achieving both high fidelity and strong security guarantees.
📝 Abstract
Digital watermarks can be embedded into AI-generated content (AIGC) by initializing the generation process with starting points sampled from a secret distribution. When combined with pseudorandom error-correcting codes, such watermarked outputs can remain indistinguishable from unwatermarked objects while maintaining robustness under white-noise perturbations. In this paper, we go beyond indistinguishability and investigate security under removal attacks. We demonstrate that indistinguishability alone does not guarantee resistance to adversarial removal. Specifically, we propose a novel attack that exploits boundary information leaked by the locations of watermarked objects. This attack significantly reduces the distortion required to remove watermarks -- by up to a factor of $15\times$ compared to a baseline white-noise attack under certain settings. To mitigate such attacks, we introduce a defense mechanism that applies a secret transformation to hide the boundary, and prove that this transformation effectively renders any attacker's perturbations equivalent to those of a naive white-noise adversary. Our empirical evaluations, conducted on multiple versions of Stable Diffusion, validate the effectiveness of both the attack and the proposed defense, highlighting the importance of addressing boundary leakage in latent-based watermarking schemes.
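The intuition behind the secret-transformation defense can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy model (not the paper's implementation): it stands in for the secret coordinate transformation with a random orthogonal matrix `Q`, and shows that a perturbation an attacker crafts along a presumed boundary direction in public coordinates maps, in the secret coordinates, to a direction the attacker cannot predict, while its magnitude is preserved — i.e., from the attacker's point of view the perturbation behaves like white noise of the same energy.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy latent dimension (illustrative choice)

# Secret orthogonal transform Q, a hypothetical stand-in for the paper's
# secret coordinate transformation; known only to the watermark embedder.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

# Attacker crafts a perturbation in the public coordinates, aimed at a
# presumed watermark boundary direction (here, the first axis e_0).
delta = np.zeros(d)
delta[0] = 1.0

# In the secret coordinates the perturbation becomes Q.T @ delta: for an
# attacker with no knowledge of Q, this direction is uniformly random on
# the unit sphere, so the boundary-aware attack degenerates to white noise.
delta_secret = Q.T @ delta

# Orthogonality preserves the perturbation's norm (its distortion budget).
print(np.linalg.norm(delta_secret))  # ≈ 1.0
```

The key property exploited here is that orthogonal maps preserve Euclidean norms, so hiding the coordinates removes the attacker's directional advantage without changing how much distortion any given perturbation inflicts.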