🤖 AI Summary
In deepfake active forensics, repeated watermark embedding, which is common in real-world editing workflows, often overwrites the original watermark and defeats source attribution. This paper formally defines and empirically validates this “multi-embedding attack” as a practical threat. Method: We propose Adversarial Interference Simulation (AIS), a training paradigm that requires no architectural modification: it synthesizes multi-round embedding attack samples and fine-tunes the model on them with an interference-robust loss function designed to enforce sparse, stable watermark representations. Contribution/Results: Experiments demonstrate that AIS significantly improves the robustness of diverse state-of-the-art active forensic models against multiple sequential embeddings. The method is plug-and-play, architecture-agnostic, and generalizes across model families, establishing a sustainable provenance framework for real-world deployment, where iterative content editing is inevitable.
📝 Abstract
With the rapid evolution of deepfake technologies and the wide dissemination of digital media, personal privacy faces increasingly serious security threats. Deepfake proactive forensics, which embeds imperceptible watermarks to enable reliable source tracking, serves as a crucial defense against these threats. Although existing methods show strong forensic ability, they rely on the idealized assumption of a single watermark embedding, which proves impractical in real-world scenarios. In this paper, we formally define and demonstrate the existence of Multi-Embedding Attacks (MEA) for the first time: when a previously protected image undergoes additional rounds of watermark embedding, the original forensic watermark can be destroyed or removed, rendering the entire proactive forensic mechanism ineffective. To address this vulnerability, we propose a general training paradigm named Adversarial Interference Simulation (AIS). Rather than modifying the network architecture, AIS explicitly simulates MEA scenarios during fine-tuning and introduces a resilience-driven loss function that enforces the learning of sparse and stable watermark representations. Our method enables the model to correctly extract the original watermark even after a second embedding. Extensive experiments demonstrate that our plug-and-play AIS training paradigm significantly enhances the robustness of various existing methods against MEA.
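To make the threat model concrete, the following is a minimal, hypothetical sketch of a Multi-Embedding Attack and an AIS-style resilience objective. The additive embedder, sign-based extractor, and the `lambda_sparse` weight are illustrative assumptions for a toy setting, not the paper's actual networks or loss; the point is only to show how a second embedding interferes with extraction of the first watermark, and how the training loss can combine extraction fidelity with a sparsity term on the watermark residual.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(image, watermark, strength=0.05):
    # Toy additive embedder: hides a +/-1 watermark as a faint residual.
    return image + strength * watermark

def extract(image, reference):
    # Toy extractor: reads the watermark back as the sign of the residual.
    return np.sign(image - reference)

# Cover image and the owner's original forensic watermark.
cover = rng.random((8, 8))
w_orig = rng.choice([-1.0, 1.0], size=(8, 8))

# Round 1: legitimate protection.
protected = embed(cover, w_orig)

# Round 2: Multi-Embedding Attack -- an unrelated second watermark is
# embedded on top of the already-protected image, as in real workflows.
w_attack = rng.choice([-1.0, 1.0], size=(8, 8))
attacked = embed(protected, w_attack)

# Where the two watermarks disagree, the residuals cancel and extraction
# degrades: fidelity measures how far the recovered bits drift from w_orig.
w_hat = extract(attacked, cover)
fidelity = np.mean((w_hat - w_orig) ** 2)

# AIS-style resilience loss on the simulated attack sample: extraction
# fidelity plus an L1 sparsity term on the embedding residual
# (lambda_sparse is a hypothetical weight, not a value from the paper).
lambda_sparse = 0.01
sparsity = np.mean(np.abs(protected - cover))
loss = fidelity + lambda_sparse * sparsity
```

In this toy setup, extraction from the singly-embedded image is exact, while the attacked image yields a strictly positive fidelity error, which is exactly the interference signal an AIS-style fine-tuning loop would minimize over many synthesized attack samples.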