🤖 AI Summary
Existing image manipulation detection and localization (IMDL) methods exhibit overestimated generalization under cross-dataset evaluation and fail to robustly detect diverse AIGC-generated forgeries encountered in real-world scenarios.
Method: We introduce the first diagnostic IMDL benchmark, featuring a novel four-dimensional taxonomy—editing model, manipulation type, semantic content, and forgery granularity—and five cross-dimensional evaluation protocols. Built upon a large-scale, multi-source, structurally annotated AIGC manipulation dataset, it establishes a rigorously controlled cross-evaluation framework.
Contribution/Results: Evaluating 11 state-of-the-art models reveals severe robustness degradation: average detection F1 drops by 32.7% and localization IoU by 41.5%. This benchmark dispels performance illusions, providing a reproducible, attributable, and mechanistically insightful evaluation paradigm for advancing IMDL generalization research.
📝 Abstract
The accessibility surge and abuse risks of user-friendly image editing models have created an urgent need for generalizable, up-to-date methods for Image Manipulation Detection and Localization (IMDL). Current IMDL research typically uses cross-dataset evaluation, where models trained on one benchmark are tested on others. However, this simplified evaluation approach conceals the fragility of existing methods when handling diverse AI-generated content, leading to misleading impressions of progress. This paper challenges this illusion by proposing NeXT-IMDL, a large-scale diagnostic benchmark designed not just to collect data, but to probe the generalization boundaries of current detectors systematically. Specifically, NeXT-IMDL categorizes AIGC-based manipulations along four fundamental axes: editing models, manipulation types, content semantics, and forgery granularity. Built upon this, NeXT-IMDL implements five rigorous cross-dimension evaluation protocols. Our extensive experiments on 11 representative models reveal a critical insight: while these models perform well in their original settings, they exhibit systemic failures and significant performance degradation when evaluated under our designed protocols that simulate real-world, various generalization scenarios. By providing this diagnostic toolkit and the new findings, we aim to advance the development towards building truly robust, next-generation IMDL models.