🤖 AI Summary
Visual generative AI poses intellectual property (IP) infringement risks because models can memorize training data. This paper presents the first systematic evaluation of two controllable prompting strategies, Chain-of-Thought Prompting and Task Instruction Prompting, for mitigating such risks. Using the structural similarity index (SSIM) as a quantitative measure of image-level resemblance between generated outputs and training samples, the experiments show that both prompting methods reduce average SSIM by 37.2%, substantially curbing verbatim or near-verbatim reproduction of training content. The study also offers a mechanistic insight: effective prompting decouples high-level semantic reasoning from low-level pixel reconstruction, thereby diminishing memory-driven generation. This establishes an interpretable, deployment-ready risk-mitigation pathway for compliant generative AI applications, advancing prompt engineering as a principled tool for IP-aware model behavior control.
📝 Abstract
Visual generative AI models have demonstrated a remarkable capability to generate high-quality images from simple inputs such as text prompts. However, because these models are trained on images from diverse sources, they risk memorizing and reproducing specific content, raising concerns about intellectual property (IP) infringement. Recent advances in prompt engineering offer a cost-effective way to improve generative AI behavior. In this paper, we evaluate the effectiveness of prompt engineering techniques in mitigating IP infringement risks in image generation. Our findings show that Chain-of-Thought Prompting and Task Instruction Prompting significantly reduce the similarity between generated images and the training data of diffusion models, thereby lowering the risk of IP infringement.
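The similarity measurement described above can be sketched in code. The snippet below is a minimal, single-window version of SSIM over grayscale arrays; the paper's evaluation presumably uses the standard locally windowed SSIM (e.g., scikit-image's `structural_similarity`), so the function name, constants, and whole-image simplification here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Simplified whole-image SSIM between two grayscale images.

    Standard SSIM averages this quantity over local sliding windows;
    computing it globally keeps the sketch short while preserving the
    luminance/contrast/structure form of the metric.
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )
```

In the paper's setting, a generated image would be compared against candidate training samples; an SSIM near 1.0 flags near-verbatim reproduction, and lower average SSIM after prompting indicates reduced memorization-driven output.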