Attention to Neural Plagiarism: Diffusion Models Can Plagiarize Your Copyrighted Images!

πŸ“… 2026-02-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work proposes a universal neural-plagiarism attack that circumvents both visible and invisible copyright-protection mechanisms in diffusion models, without any training or fine-tuning. The approach uses inverted latents as β€œanchors” and searches for β€œshim” perturbations that gradually shift those anchors, injecting the perturbations into the cross-attention mechanism at different timesteps to achieve semantically controllable replication, or deliberate obfuscation, of copyrighted content. Remarkably, the attack is a purely gradient-based search, and it successfully reproduces protected images from benchmarks such as MS-COCO as well as real-world copyrighted material. The study exposes a critical vulnerability in current generative models, demonstrating their susceptibility to unauthorized reproduction of protected works and underscoring the urgent need for robust defenses against such copyright-infringement risks.

πŸ“ Abstract
In this paper, we highlight a critical threat posed by emerging neural models: data plagiarism. We demonstrate how modern neural models (e.g., diffusion models) can replicate copyrighted images, even when protected by advanced watermarking techniques. To expose vulnerabilities in copyright protection and facilitate future research, we propose a general approach to neural plagiarism that can either forge replicas of copyrighted data or introduce copyright ambiguity. Our method, based on "anchors and shims", employs inverse latents as anchors and finds shim perturbations that gradually shift the anchor latents away, thereby evading watermark or copyright detection. By applying perturbations to the cross-attention mechanism at different timesteps, our approach induces varying degrees of semantic modification in copyrighted images, enabling it to bypass protections ranging from visible trademarks and signatures to invisible watermarks. Notably, our method is a purely gradient-based search that requires no additional training or fine-tuning. Experiments on MS-COCO and real-world copyrighted images show that diffusion models can replicate copyrighted images, underscoring the urgent need for countermeasures against neural plagiarism.
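The "anchors and shims" idea can be caricatured in a few lines. The sketch below is a hypothetical toy, not the paper's actual method: the watermark detector is a stand-in linear classifier on a latent vector, and the "shim" is a perturbation found by plain gradient descent that lowers the detector's score while a penalty term keeps the latent close to its anchor. A real attack would instead operate on a diffusion model's inverted latents and perturb its cross-attention maps across timesteps.

```python
import numpy as np

def sigmoid(x):
    # Numerically stable sigmoid via tanh (avoids exp overflow).
    return 0.5 * (1.0 + np.tanh(0.5 * x))

def find_shim(z_anchor, w, lam=0.1, lr=0.5, steps=200):
    """Gradient search for a 'shim' perturbation delta.

    Toy objective: minimize the detector logit (w @ (z_anchor + delta))
    plus a proximity penalty lam * ||delta||^2, so the shimmed latent
    evades the (hypothetical) watermark detector while staying near
    the anchor latent.
    """
    delta = np.zeros_like(z_anchor)
    for _ in range(steps):
        # Analytic gradient of the toy objective w.r.t. delta.
        grad = w + 2.0 * lam * delta
        delta -= lr * grad
    return delta

rng = np.random.default_rng(0)
dim = 64
w = rng.normal(size=dim)                   # stand-in detector weights
# A "watermarked" anchor latent: aligned with w so the detector fires.
z_anchor = 3.0 * w / np.linalg.norm(w) + 0.1 * rng.normal(size=dim)

delta = find_shim(z_anchor, w)
before = sigmoid(w @ z_anchor)             # detector score on the anchor
after = sigmoid(w @ (z_anchor + delta))    # score after the shim is applied
print(f"detector score: {before:.3f} -> {after:.3f}")
```

The proximity penalty `lam` plays the role of the anchor in the paper's framing: larger values keep the perturbed latent semantically closer to the protected image, at the cost of weaker evasion.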
Problem

Research questions and friction points this paper is trying to address.

neural plagiarism
diffusion models
copyright infringement
image replication
watermark evasion
Innovation

Methods, ideas, or system contributions that make the work stand out.

neural plagiarism
diffusion models
copyright evasion
inverse latents
cross-attention perturbation