🤖 AI Summary
This work addresses the challenge of tracing AI-generated images to their source models, a task where existing methods often fail to generalize beyond known generators due to their reliance on model-specific features. To overcome this limitation, the study introduces LIDA, a model-agnostic framework that formulates image provenance as an instance retrieval problem. Leveraging low-bit fingerprint encoding, unsupervised pretraining, and few-shot adaptation, LIDA enables efficient cross-model attribution without requiring access to internal generator details. The approach achieves state-of-the-art performance in both deepfake detection and image provenance tasks, significantly outperforming prior methods under zero-shot and few-shot settings.
📝 Abstract
With the rapid advancement of AIGC technologies, image forensics faces unprecedented challenges. Traditional methods cannot cope with the increasingly realistic images produced by rapidly evolving image generation techniques. To facilitate the identification of AI-generated images and the attribution of their source models, generative image watermarking and AI-generated image attribution have emerged as key research focuses in recent years. However, existing methods are model-dependent: they require access to the generative models and lack generality and scalability to new, unseen generators. To address these limitations, this work presents a new paradigm for AI-generated image attribution by formulating it as an instance retrieval problem rather than a conventional image classification problem. We propose an efficient model-agnostic framework called Low-bIt-plane-based Deepfake Attribution (LIDA). The input to LIDA is produced by a Low-Bit Fingerprint Generation module, and training consists of Unsupervised Pre-Training followed by Few-Shot Attribution Adaptation. Comprehensive experiments demonstrate that LIDA achieves state-of-the-art performance for both Deepfake detection and image attribution under zero- and few-shot settings. The code is available at https://github.com/hongsong-wang/LIDA.
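The abstract does not detail how the Low-Bit Fingerprint Generation module works, but the framework's name suggests it operates on the low bit planes of an image, whose low-order bits carry little visible content yet can retain subtle generator artifacts. The following is a minimal, hypothetical sketch of such an extraction step, assuming 8-bit images and NumPy; the function name `low_bit_planes` and the parameter `num_planes` are illustrative, not from the paper.

```python
import numpy as np

def low_bit_planes(image: np.ndarray, num_planes: int = 2) -> np.ndarray:
    """Extract the lowest `num_planes` bit planes of an 8-bit image.

    Illustrative only: the intuition is that low-order bits look like
    noise to the eye but may encode model-specific traces usable as a
    fingerprint for attribution.
    """
    mask = (1 << num_planes) - 1          # e.g. 0b11 keeps the two lowest planes
    planes = image & mask                 # zero out all higher-order bits
    # Rescale to the full 8-bit range so the fingerprint is easy to inspect
    return (planes * (255 // mask)).astype(np.uint8)

# Toy usage on a random "image"
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
fp = low_bit_planes(img, num_planes=2)
```

In the actual framework, such a fingerprint would presumably be fed to the retrieval network trained with unsupervised pre-training and few-shot adaptation, rather than inspected directly.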