🤖 AI Summary
This work addresses the challenge of detecting and localizing forgeries introduced by emerging instruction-driven image editing, which existing forensic methods—primarily designed for traditional inpainting-based manipulations—struggle to handle effectively. To bridge this gap, the authors introduce LocateEdit-Bench, the first benchmark specifically tailored for forgery localization in instruction-guided editing scenarios. It comprises 231K images generated by four state-of-the-art editing models across three common editing types. The study also establishes two multi-metric evaluation protocols to systematically assess current localization approaches. By providing a high-quality, open-source dataset and a rigorous evaluation framework, this work fills a critical gap in the forensic analysis of instruction-based image manipulations, highlights the limitations of existing techniques, and lays a foundation for future research in this domain.
📝 Abstract
Recent advancements in image editing have enabled highly controllable and semantically aware alteration of visual content, posing unprecedented challenges to manipulation localization. However, existing AI-generated forgery localization methods primarily target inpainting-based manipulations, making them ineffective against the latest instruction-based editing paradigms. To bridge this critical gap, we propose LocateEdit-Bench, a large-scale dataset comprising 231K edited images, designed specifically to benchmark localization methods against instruction-driven image editing. Our dataset incorporates four cutting-edge editing models and covers three common edit types. We conduct a detailed analysis of the dataset and develop two multi-metric evaluation protocols to assess existing localization methods. Our work establishes a foundation for keeping pace with the evolving landscape of image editing, thereby facilitating the development of effective methods for future forgery localization. The dataset will be open-sourced upon acceptance.