🤖 AI Summary
This work systematically evaluates adversarial perturbation-based style protection tools designed for artists, showing that they fail quickly under realistic attack scenarios, including image upscaling and fine-tuning of image generation models on protected artworks, and cannot prevent high-fidelity imitation of artistic styles by generative AI.
Method: We construct a robustness evaluation framework covering mainstream protection tools with millions of downloads, integrating automated adversarial testing, human-subject studies, and quantitative style reproduction analysis.
Contribution/Results: We provide the first empirical evidence that existing tools offer only illusory security. Using low-effort bypass techniques grounded in realistic, low-overhead assumptions, we successfully break all tested tools. Human-subject experiments confirm that the styles of protected artworks remain recognizable and can be reproduced at high fidelity, fundamentally challenging the efficacy and practicality of current adversarial perturbation strategies for artistic style protection.
📝 Abstract
Artists are increasingly concerned about advancements in image generation models that can closely replicate their unique artistic styles. In response, several protection tools against style mimicry have been developed that incorporate small adversarial perturbations into artworks published online. In this work, we evaluate the effectiveness of popular protections -- with millions of downloads -- and show they only provide a false sense of security. We find that low-effort and "off-the-shelf" techniques, such as image upscaling, are sufficient to create robust mimicry methods that significantly degrade existing protections. Through a user study, we demonstrate that all existing protections can be easily bypassed, leaving artists vulnerable to style mimicry. We caution that tools based on adversarial perturbations cannot reliably protect artists from the misuse of generative AI, and urge the development of alternative non-technological solutions.
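To make the "image upscaling" attack vector concrete, below is a minimal, hypothetical sketch of a purification step in that spirit: resampling a protected image to attenuate small adversarial perturbations before it is used to fine-tune a mimicry model. This is an illustrative assumption based on the abstract, not the authors' exact pipeline (which may rely on learned super-resolution or diffusion-based upscalers); the file names and scaling factor are placeholders.

```python
"""
Sketch of a naive perturbation-removal step via resampling.
Assumes Pillow is installed; not the paper's exact method.
"""
from PIL import Image


def purify_by_resampling(path: str, factor: int = 2) -> Image.Image:
    """Downscale then upscale an image, washing out high-frequency perturbations."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    # Downscaling discards fine detail, including small adversarial noise.
    small = img.resize((w // factor, h // factor), Image.Resampling.BICUBIC)
    # Upscaling restores the original resolution for downstream fine-tuning.
    return small.resize((w, h), Image.Resampling.BICUBIC)


if __name__ == "__main__":
    cleaned = purify_by_resampling("protected_artwork.png")  # hypothetical input file
    cleaned.save("purified_artwork.png")
```

In practice, an attacker could apply such an off-the-shelf step to every protected image before fine-tuning a generative model, which is why the abstract characterizes these bypasses as low-effort.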