Regeneration Based Training-free Attribution of Fake Images Generated by Text-to-Image Generative Models

📅 2024-03-03
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the challenge of attributing synthetic images generated by text-to-image models, this paper proposes a training-free, model-agnostic attribution method based on image regeneration. First, prompt inversion is applied to recover the latent text prompt from the input image; then, multiple candidate generative models are prompted with the recovered text to regenerate images; finally, the original generator is identified via CLIP-based feature similarity ranking. This approach pioneers a training-free paradigm that supports an arbitrary number of black-box generative models, scales well, and exhibits strong robustness to common post-processing distortions, including blurring, compression, and scaling. On standard benchmarks, it achieves state-of-the-art attribution accuracy. Moreover, it maintains stable performance under diverse adversarial post-processing attacks and can be readily integrated to enhance the attribution accuracy of existing methods.
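The three-step pipeline described above (prompt inversion, regeneration, similarity ranking) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `invert_prompt`, `extract_features`, and the toy candidate models are placeholders for what the paper realizes with prompt inversion and a CLIP image encoder.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def attribute_image(test_image, candidate_models, invert_prompt, extract_features):
    """Attribute a fake image to the most likely source model.

    test_image       -- the image under investigation
    candidate_models -- dict: model name -> generate(prompt) -> image
    invert_prompt    -- image -> recovered text prompt (prompt inversion)
    extract_features -- image -> feature vector (e.g. a CLIP image encoder)
    """
    prompt = invert_prompt(test_image)            # step 1: prompt inversion
    test_feat = extract_features(test_image)
    scores = {}
    for name, generate in candidate_models.items():
        regenerated = generate(prompt)            # step 2: regeneration
        scores[name] = cosine_similarity(test_feat, extract_features(regenerated))
    # step 3: the candidate whose regeneration lies closest to the test
    # image in feature space is predicted to be the source model
    predicted = max(scores, key=scores.get)
    return predicted, scores

# Toy demonstration: vectors stand in for images and features, and the
# "models" simply return fixed outputs regardless of the prompt.
models = {
    "model_a": lambda p: np.array([1.0, 0.0, 0.1]),
    "model_b": lambda p: np.array([0.0, 1.0, 0.0]),
}
test = np.array([0.9, 0.1, 0.1])
source, sims = attribute_image(test, models, lambda img: "a photo", lambda x: x)
# source == "model_a", since its regeneration is most similar to the test image
```

Because the ranking only needs each candidate's generated output, the candidates can remain black boxes, which is what makes the approach model-agnostic and easy to extend to new generators.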

📝 Abstract
Text-to-image generative models have recently garnered significant attention for their ability to generate images from prompt descriptions. While these models show promising performance, concerns have been raised about potential misuse of the generated fake images. In response, we present a simple yet effective training-free method to attribute fake images generated by text-to-image models to their source models. Given a test image to be attributed, we first invert the textual prompt of the image, then feed the reconstructed prompt into different candidate models to regenerate candidate fake images. By computing and ranking the similarity between the test image and the candidate images, we can determine the source of the image. This attribution allows model owners to be held accountable for any misuse of their models. Note that our approach does not limit the number of candidate text-to-image generative models. Comprehensive experiments reveal that (1) our method effectively attributes fake images to their source models, achieving attribution performance comparable to the state-of-the-art method; (2) our method is highly scalable, which adapts well to real-world attribution scenarios; and (3) the proposed method yields satisfactory robustness to common attacks, such as Gaussian blurring, JPEG compression, and resizing. We also analyze the factors that influence attribution performance, and explore the boost the proposed method brings as a plug-in to improve the performance of existing SOTA. We hope our work can shed some light on tracing the source of AI-generated images, as well as preventing the misuse of text-to-image generative models.
Problem

Research questions and friction points this paper is trying to address.

Attributing fake images to source text-to-image models
Training-free method for model source identification
Preventing misuse of generative models via attribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free attribution via prompt inversion and regeneration
Compares test image similarity with regenerated candidate images
Scalable to multiple models and robust against common attacks
🔎 Similar Papers
2024-03-28 · IEEE Workshop/Winter Conference on Applications of Computer Vision · Citations: 1
2024-06-13 · Neural Information Processing Systems · Citations: 2
Meiling Li
Fudan University, Shanghai, China
Zhenxing Qian
Fudan University, Shanghai, China
Xinpeng Zhang
Fudan University, Shanghai, China