🤖 AI Summary
Existing AIGC image watermarking methods lack robustness against realistic forgery and removal attacks and struggle to achieve semantic-level tampering localization. This work proposes PAI, a training-free intrinsic watermarking framework that, for the first time, introduces a key-guided perturbation mechanism into the denoising trajectory of diffusion models to embed watermarks during generation, enabling ownership verification, attack detection, and fine-grained tampering localization. By enhancing the semantic coupling between identity and content, PAI provides theoretical guarantees of key uniqueness. Experiments demonstrate that PAI achieves a verification accuracy of 98.43% under 12 diverse attacks, outperforming the current state-of-the-art methods by an average of 37.25%, while maintaining superior localization capability even in advanced AIGC editing scenarios.
📝 Abstract
Protecting the copyright of user-generated AI images is an emerging challenge as AIGC becomes pervasive in creative workflows. Existing watermarking methods (1) remain vulnerable to real-world adversarial threats, often forced to trade off between defenses against spoofing and removal attacks; and (2) cannot support semantic-level tampering localization. We introduce PAI, a training-free inherent watermarking framework for AIGC copyright protection that is plug-and-play with diffusion-based AIGC services. PAI simultaneously provides three key functionalities: robust ownership verification, attack detection, and semantic-level tampering localization. Unlike existing inherent watermark methods that only embed watermarks at the noise initialization of diffusion models, we design a novel key-conditioned deflection mechanism that subtly steers the denoising trajectory according to the user key. Such trajectory-level coupling strengthens the semantic entanglement of identity and content, thereby further enhancing robustness against real-world threats. Moreover, we provide a theoretical analysis proving that only the valid key can pass verification. Experiments across 12 attack methods show that PAI achieves 98.43% verification accuracy, improving over SOTA methods by 37.25% on average, and retains strong tampering localization performance even against advanced AIGC edits. Our code is available at https://github.com/QingyuLiu/PAI.
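To make the key-conditioned deflection idea concrete, here is a minimal toy sketch in NumPy. It is not the authors' implementation: the linear "denoiser" (a simple contraction), the step count, the deflection scale `eps`, and the correlation-based `verify` function are all illustrative assumptions. The only ideas taken from the abstract are (a) deriving a deterministic, key-dependent perturbation at each denoising step and (b) verifying ownership by checking that the generated sample aligns with the trajectory induced by the claimed key.

```python
import hashlib
import numpy as np

def key_direction(key: str, dim: int, step: int) -> np.ndarray:
    # Deterministically derive a unit-norm perturbation from (key, step)
    # via a hash-seeded RNG, so the same key always yields the same deflection.
    seed = int.from_bytes(hashlib.sha256(f"{key}:{step}".encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    d = rng.standard_normal(dim)
    return d / np.linalg.norm(d)

def denoise_with_key(x0: np.ndarray, key: str, steps: int = 20, eps: float = 0.5) -> np.ndarray:
    # Toy stand-in for a diffusion denoising loop: each step contracts the
    # current state (the "denoising") and adds a small key-conditioned deflection.
    x = x0
    for t in range(steps):
        x = 0.8 * x + eps * key_direction(key, x.shape[0], t)
    return x

def verify(x: np.ndarray, key: str, steps: int = 20, eps: float = 0.5) -> float:
    # Reconstruct the accumulated deflection the claimed key would have induced
    # (each step-t deflection is contracted by the remaining steps), then score
    # the sample by cosine similarity with that reference trajectory.
    ref = sum(
        (0.8 ** (steps - 1 - t)) * eps * key_direction(key, x.shape[0], t)
        for t in range(steps)
    )
    return float(x @ ref / (np.linalg.norm(x) * np.linalg.norm(ref)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.standard_normal(64)           # initial "noise"
    img = denoise_with_key(x0, key="alice")
    print(verify(img, "alice"))            # high: valid key aligns with trajectory
    print(verify(img, "bob"))              # near zero: wrong key is uncorrelated
```

The valid key scores near 1 because the sample is, by construction, dominated by that key's accumulated deflection, while any other key produces an essentially random direction in high dimension; this mirrors, at toy scale, the paper's claim that only the valid key passes verification.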