🤖 AI Summary
In social media and generative AI applications, copyright infringement and source image replication (such as training data leakage) pose critical risks. To address this, we propose a two-step structural evasion framework tailored to diffusion models: first, generating semantic segmentation masks that explicitly encode image structure; second, applying structure-aware reverse denoising guidance to steer the generation process away from structural patterns present in the training data. Our method requires neither model retraining nor complex prompt engineering, enabling low-cost, endogenous copyright-risk mitigation for the first time. Experiments demonstrate that, while maintaining high-fidelity synthesis (FID < 15), our approach significantly reduces structural similarity, measured by SSIM, between generated images and the training set, effectively preventing source replication without compromising visual quality.
📝 Abstract
In today's age of social media and marketing, copyright issues can be a major roadblock to the free sharing of images. Generative AI models make it possible to create high-quality images, but concerns about copyright infringement hinder their widespread use. Because these models draw on training images to generate new ones, ensuring that they do not violate intellectual property rights is often a daunting task. Some models have even been observed to directly copy copyrighted training images, a problem often referred to as source copying. Traditional copyright protection measures such as watermarks and metadata have proven ineffective in this regard. To address this issue, we propose a novel two-step image generation approach inspired by conditional diffusion models. The first step creates an image segmentation mask for an initial prompt-generated image; this mask encodes the structural layout of the image. The diffusion model is then asked to regenerate the image while avoiding that structure. This approach reduces structural similarity to the training image, i.e., it avoids the source copying problem without expensive model retraining or user-driven prompt engineering. To our knowledge, this makes our approach the most computationally inexpensive way to avoid both copyright infringement and source copying in diffusion-based image generation.
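The two-step idea can be illustrated with a deterministic toy sketch. All function names and the simplified dynamics below are illustrative assumptions, not the paper's implementation: the "segmentation mask" is a simple binarization rather than a semantic segmentation network, and the "reverse diffusion" is a plain iterative update. The point is the guidance term, which at each step pushes the sample away from the masked structure while the denoising term pulls it toward the training pattern.

```python
import numpy as np

def segment_mask(image, threshold=0.5):
    """Toy 'segmentation': binarize the image to capture its coarse
    structure. A real system would use a semantic segmentation model."""
    return (image > threshold).astype(float)

def structural_overlap(x, mask):
    """Mean intensity of x inside the masked (protected) region; a crude
    stand-in for SSIM against the source structure."""
    return float((x * mask).sum() / mask.sum())

def generate(steps, guidance_weight, mask, shape=(8, 8)):
    """Deterministic toy 'reverse diffusion': start from a flat init and
    repeatedly pull x toward a target pattern (here the worst case, where
    the model tries to replicate the source structure), while the
    structure-avoidance guidance pushes x away from the mask."""
    x = np.full(shape, 0.5)
    target = mask.copy()  # worst case: model wants to replicate the source
    for _ in range(steps):
        x = x + 0.1 * (target - x)        # denoising pull toward training data
        x = x - guidance_weight * mask    # structure-avoidance guidance
        x = np.clip(x, 0.0, 1.0)
    return x

# A square "source structure" occupying the center of the image.
source = np.zeros((8, 8))
source[2:6, 2:6] = 1.0
mask = segment_mask(source)

plain = generate(steps=50, guidance_weight=0.0, mask=mask)
guided = generate(steps=50, guidance_weight=0.05, mask=mask)
```

Without guidance the sample converges onto the source structure (overlap near 1.0 inside the mask); with the guidance term the two forces balance and the overlap stays markedly lower, mirroring the SSIM reduction the abstract reports.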