🤖 AI Summary
This work addresses the surge of forged content produced by prompt-driven AI image editing, a challenge exacerbated by scarce training data and by existing forgery localization methods that are not tailored to these editing models. To tackle this, the authors propose ICL-Net, a novel architecture that combines a triple-stream backbone with intra-image contrastive learning to accurately localize edited regions. They also introduce PromptForge-350k, the first large-scale dataset dedicated to forgery localization for prompt-based image editing, built with fully automated mask annotation. Experimental results show that ICL-Net achieves an IoU of 62.5% on PromptForge-350k, outperforming the current state of the art by 5.1%, remains robust to image degradation (IoU drop below 1%), and generalizes to unseen editing models with an average IoU of 41.5%.
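To make the intra-image contrastive idea concrete, below is a minimal sketch of one plausible formulation: pixel embeddings are pulled toward the prototype of their own region (edited or pristine) within the same image and pushed away from the other region's prototype. This is an illustrative InfoNCE-style instantiation, not the paper's exact objective; the tensor shapes, the prototype construction, and the `temperature` value are all assumptions.

```python
import torch
import torch.nn.functional as F

def intra_image_contrastive_loss(features, mask, temperature=0.1):
    """Hypothetical sketch of an intra-image contrastive objective.

    features: (B, C, H, W) backbone feature map.
    mask:     (B, 1, H, W) binary ground-truth mask (1 = edited region).
    The exact loss used by ICL-Net is not specified in the abstract;
    this is one plausible formulation.
    """
    B, C, H, W = features.shape
    mask = F.interpolate(mask.float(), size=(H, W), mode="nearest")
    feats = F.normalize(features, dim=1)

    # Region prototypes: mean unit-norm embedding of edited / pristine pixels.
    eps = 1e-6
    edited_proto = (feats * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + eps)
    pristine_proto = (feats * (1 - mask)).sum(dim=(2, 3)) / ((1 - mask).sum(dim=(2, 3)) + eps)
    edited_proto = F.normalize(edited_proto, dim=1)     # (B, C)
    pristine_proto = F.normalize(pristine_proto, dim=1)

    # Per-pixel cosine similarity to each region prototype.
    sim_edit = torch.einsum("bchw,bc->bhw", feats, edited_proto) / temperature
    sim_prist = torch.einsum("bchw,bc->bhw", feats, pristine_proto) / temperature

    # InfoNCE-style: each pixel should best match its own region's prototype.
    logits = torch.stack([sim_edit, sim_prist], dim=1)  # (B, 2, H, W)
    target = (1 - mask.squeeze(1)).long()               # 0 = edited, 1 = pristine
    return F.cross_entropy(logits, target)
```

A loss like this would typically be added as an auxiliary term alongside the standard segmentation loss, encouraging the backbone to separate forensic features of edited and pristine pixels within each image rather than only across the batch.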
📝 Abstract
The rapid democratization of prompt-based AI image editing has recently exacerbated the risks of malicious content fabrication and misinformation. However, forgery localization methods targeting these emerging editing techniques remain significantly under-explored. To bridge this gap, we first introduce a fully automated mask annotation framework that leverages keypoint alignment and semantic-space similarity to generate precise ground-truth masks for edited regions. Based on this framework, we construct PromptForge-350k, a large-scale forgery localization dataset covering four state-of-the-art prompt-based AI image editing models, thereby mitigating data scarcity in this domain. Furthermore, we propose ICL-Net, an effective forgery localization network featuring a triple-stream backbone and intra-image contrastive learning, a design that enables the model to capture highly robust and generalizable forensic features. Extensive experiments demonstrate that our method achieves an IoU of 62.5% on PromptForge-350k, outperforming SOTA methods by 5.1%. It also exhibits strong robustness against common degradations, with an IoU drop of less than 1%, and shows promising generalization to unseen editing models, achieving an average IoU of 41.5%.
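The annotation framework is only outlined above; the sketch below shows how keypoint alignment followed by semantic-space similarity could plausibly yield an edit mask. Assumptions: ORB keypoints with a RANSAC homography stand in for the unspecified alignment step, and `embed_patches` is a hypothetical helper returning unit-normalized per-patch embeddings (e.g. from a frozen ViT); the patch granularity and similarity threshold are illustrative, not the paper's values.

```python
import cv2
import numpy as np

def annotate_edit_mask(original, edited, embed_patches, thresh=0.8):
    """Hypothetical sketch of automated mask annotation via keypoint
    alignment + semantic-space similarity. Not the paper's released code.

    original, edited: uint8 BGR images of the same scene.
    embed_patches:    assumed callable mapping an image to an (h, w, D)
                      array of unit-norm per-patch embeddings.
    """
    # 1. Keypoint alignment: warp the edited image onto the original so
    #    that unedited regions coincide pixel-to-pixel. Assumes enough
    #    keypoint matches survive in the unedited background.
    g1 = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(edited, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    aligned = cv2.warpPerspective(edited, H, original.shape[1::-1])

    # 2. Semantic-space similarity: patches whose embeddings diverge from
    #    the original are flagged as edited.
    e1 = embed_patches(original)   # (h, w, D), unit-norm
    e2 = embed_patches(aligned)
    sim = (e1 * e2).sum(-1)        # per-patch cosine similarity
    mask = (sim < thresh).astype(np.uint8)

    # 3. Upsample the patch grid back to pixel resolution.
    return cv2.resize(mask * 255, original.shape[1::-1],
                      interpolation=cv2.INTER_NEAREST)
```

The appeal of such a pipeline is that it needs no manual labeling: given an (original, edited) pair from any editing model, alignment removes global geometric drift and the semantic comparison isolates the locally changed content, which is what makes a fully automated 350k-scale dataset feasible.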