🤖 AI Summary
Current digital watermarking methods suffer from cross-image watermark copying attacks and are vulnerable to detector evasion, leading to source misattribution. To address these issues, this paper proposes MetaSeal, a content-dependent, cryptographically enhanced image watermarking framework. Its key contributions are: (1) tightly coupling watermark embedding with image-specific features to prevent transfer-based forgery; (2) integrating a lightweight cryptographic signature for self-contained verification, removing the need for a detector model that attackers could deceive; and (3) jointly designing robust watermark encoding and visual tampering localization, so the watermark survives common benign transformations while malicious edits leave forensic evidence. Extensive experiments demonstrate that MetaSeal resists diverse forgery attacks, including copy-move, generative adversarial, and diffusion-based manipulations, on both natural and AI-generated images. It simultaneously delivers high security, robustness, and verifiability without requiring external detectors or trusted infrastructure.
📝 Abstract
The rapid growth of digital and AI-generated images has amplified the need for secure and verifiable methods of image attribution. While digital watermarking offers more robust protection than metadata-based approaches, which can be easily stripped, current watermarking techniques remain vulnerable to forgery, creating risks of misattribution that can damage the reputations of AI model developers and infringe the rights of digital artists. These vulnerabilities arise from two key issues: (1) content-agnostic watermarks, which, once learned or leaked, can be transferred across images to fake attribution; and (2) reliance on detector-based verification, which is unreliable because detectors can be tricked. We present MetaSeal, a novel framework for content-dependent watermarking with cryptographic security guarantees to safeguard image attribution. Our design provides (1) forgery resistance, preventing unauthorized replication and enforcing cryptographic verification; (2) robust, self-contained protection, embedding attribution directly into images while maintaining resilience against benign transformations; and (3) evidence of tampering, making malicious alterations visually detectable. Experiments demonstrate that MetaSeal effectively mitigates forgery attempts and applies to both natural and AI-generated images, establishing a new standard for secure image attribution.
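The core idea, binding a cryptographic tag to image-specific content so it cannot be copied to another image and can be checked without a learned detector, can be sketched in a few lines. This is a conceptual illustration only, not MetaSeal's actual construction: the paper embeds the signature into the pixels as a robust watermark, whereas here the tag is kept alongside the image, a keyed HMAC stands in for the asymmetric signature, and raw image bytes stand in for learned image features.

```python
import hashlib
import hmac

# Placeholder key; a real attribution system would use an asymmetric
# key pair so anyone can verify but only the issuer can sign.
SECRET_KEY = b"issuer-secret-key"

def seal(image_bytes: bytes) -> bytes:
    """Bind a verification tag to this specific image's content."""
    # Content-dependent fingerprint: any change to the image changes it.
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).digest()

def verify(image_bytes: bytes, tag: bytes) -> bool:
    """Self-contained check: recompute the tag from the content itself,
    with no detector model to evade."""
    return hmac.compare_digest(seal(image_bytes), tag)

original = b"\x89PNG...pixel data..."
tag = seal(original)
assert verify(original, tag)                # authentic image verifies
assert not verify(original + b"\x00", tag)  # tampering breaks the seal
assert not verify(b"another image", tag)    # tag is not transferable
```

Because the tag is derived from the image's own content, copying it onto a different image fails verification, which is exactly the transfer-based forgery the framework is designed to prevent.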