π€ AI Summary
To address copyright infringement and artistic style appropriation risks posed by diffusion models, this paper proposes a visually lossless copyright protection method. The approach comprises three key contributions: (1) perception-sensitive map-guided instance-aware fine-tuning, enabling fine-grained stylistic perturbation; (2) difficulty-aware dynamic intensity modulation, which adaptively adjusts perturbation magnitude based on the sampleβs stylistic mimicability; and (3) a multi-scale perceptual constraint library, jointly optimizing defense robustness and image fidelity. Without introducing perceptible visual artifacts, the method achieves over 92% style imitation suppression, reduces LPIPS by 41%, and improves FID by 27%, significantly outperforming existing state-of-the-art methods.
π Abstract
Recent progress in diffusion models has profoundly enhanced the fidelity of image generation, but it has raised concerns about copyright infringements. While prior methods have introduced adversarial perturbations to prevent style imitation, most are accompanied by the degradation of artworks' visual quality. Recognizing the importance of maintaining this, we introduce a visually improved protection method while preserving its protection capability. To this end, we devise a perceptual map to highlight areas sensitive to human eyes, guided by instance-aware refinement, which refines the protection intensity accordingly. We also introduce a difficulty-aware protection by predicting how difficult the artwork is to protect and dynamically adjusting the intensity based on this. Lastly, we integrate a perceptual constraints bank to further improve the imperceptibility. Results show that our method substantially elevates the quality of the protected image without compromising on protection efficacy.