🤖 AI Summary
To address the provenance challenge posed by the proliferation of diffusion-based, text-guided image editing tools, this paper proposes LambdaTracer—the first λ-adaptive latent-space attribution framework. Unlike prior approaches, it requires no modification to generation or editing pipelines, imposes no watermarking overhead, and avoids model-specific interventions. Instead, it analyzes diffusion latent representations, uses reconstruction error in the latent space to dynamically calibrate λ, and models invariances under iterative editing, enabling robust attribution across diverse tools—including InstructPix2Pix, ControlNet, and Photoshop—and under multi-layer adversarial manipulation. Extensive evaluation shows that LambdaTracer significantly outperforms existing baselines in cross-editor and multi-round tampering scenarios, accurately distinguishing original AI-generated images from maliciously edited ones. The framework offers a practical, general-purpose, and non-intrusive solution for AIGC copyright protection and content-authenticity verification.
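The summary does not spell out the calibration rule, but the core idea of scoring images by their latent-space reconstruction error, with λ adapted to how anomalous that error is, can be sketched roughly as follows. Everything here is an illustrative assumption, not the paper's actual method: a random linear projection stands in for a diffusion VAE encoder/decoder, and the z-score-based λ calibration and `threshold` are hypothetical stand-ins for LambdaTracer's calibration rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a diffusion VAE (an assumption for illustration):
# a random linear map to a lower-dimensional latent and its pseudo-inverse.
D, d = 64, 16                                  # image dim, latent dim
W = rng.standard_normal((d, D)) / np.sqrt(D)   # "encoder" weights
W_pinv = np.linalg.pinv(W)                     # "decoder" weights

def encode(x):
    return W @ x

def decode(z):
    return W_pinv @ z

def reconstruction_error(x):
    """Mean squared error of the latent round-trip x -> z -> x_hat."""
    return float(np.mean((x - decode(encode(x))) ** 2))

def adaptive_lambda(err, ref_errors, base=1.0):
    """Hypothetical lambda calibration: scale by how anomalous this
    image's reconstruction error is relative to a reference set of
    unedited generations (clipped z-score)."""
    mu = np.mean(ref_errors)
    sigma = np.std(ref_errors) + 1e-8
    return float(base * max(0.0, (err - mu) / sigma))

def attribution_score(x, ref_errors, threshold=3.0):
    """Flag an image as edited when its lambda-weighted reconstruction
    error deviates strongly from the unedited reference distribution."""
    lam = adaptive_lambda(reconstruction_error(x), ref_errors)
    return lam, lam > threshold
```

In this toy setup, images lying in the decoder's range reconstruct near-perfectly, while layered edits push content outside that range and inflate the calibrated score; the real framework would replace the linear map with the diffusion model's actual latent encoder.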
📝 Abstract
Recent advances in diffusion models have driven the growth of text-guided image editing tools, enabling precise, iterative modification of synthesized content. As these tools become increasingly accessible, however, they also introduce significant risks of misuse, making robust attribution methods critical for content authenticity and traceability. The same creative flexibility makes attribution difficult, particularly in adversarial settings where edits can be layered to obscure an image's origins. We propose LambdaTracer, a novel latent-space attribution method that robustly identifies and differentiates authentic outputs from manipulated ones without requiring any modification to generative or editing pipelines. By adaptively calibrating reconstruction losses, LambdaTracer remains effective across diverse iterative editing processes, whether automated through text-guided editing tools such as InstructPix2Pix and ControlNet or performed manually with editing software such as Adobe Photoshop. Extensive experiments show that our method consistently outperforms baseline approaches in distinguishing maliciously edited images, providing a practical solution to safeguard ownership, creativity, and credibility in open, fast-evolving AI ecosystems.