Training-free Source Attribution of AI-generated Images via Resynthesis

📅 2025-10-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenging problem of attributing AI-generated images to their source models. We propose a training-free, one-shot zero-/few-shot attribution method that leverages textual prompts to guide candidate generative models in re-synthesizing the input image; attribution is then performed by measuring feature-space similarity between the re-synthesized output and the original image. Our approach introduces the first re-synthesis-based, training-free attribution framework for generative model provenance. Complementing this, we release the first dedicated benchmark dataset specifically designed for generative image attribution. Extensive experiments demonstrate that our method significantly outperforms existing few-shot approaches under data-scarce conditions, validating the efficacy of the re-synthesis paradigm. Moreover, the new benchmark provides a more rigorous and challenging evaluation platform for generative model attribution research.

Technology Category

Application Category

📝 Abstract
Synthetic image source attribution is a challenging task, especially in data scarcity conditions requiring few-shot or zero-shot classification capabilities. We present a new training-free one-shot attribution method based on image resynthesis. A prompt describing the image under analysis is generated, then it is used to resynthesize the image with all the candidate sources. The image is attributed to the model which produced the resynthesis closest to the original image in a proper feature space. We also introduce a new dataset for synthetic image attribution consisting of face images from commercial and open-source text-to-image generators. The dataset provides a challenging attribution framework, useful for developing new attribution models and testing their capabilities on different generative architectures. The dataset structure allows to test approaches based on resynthesis and to compare them to few-shot methods. Results from state-of-the-art few-shot approaches and other baselines show that the proposed resynthesis method outperforms existing techniques when only a few samples are available for training or fine-tuning. The experiments also demonstrate that the new dataset is a challenging one and represents a valuable benchmark for developing and evaluating future few-shot and zero-shot methods.
Problem

Research questions and friction points this paper is trying to address.

Attributing AI-generated images to their source models without training
Developing one-shot classification using image resynthesis techniques
Creating challenging datasets for testing few-shot and zero-shot attribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free one-shot attribution via image resynthesis
Resynthesis comparison in feature space for source identification
New dataset enables testing resynthesis versus few-shot methods
🔎 Similar Papers
No similar papers found.