🤖 AI Summary
Image manipulation localization suffers from a critical bottleneck: severe scarcity of high-quality pixel-level annotations. To address this, we propose CAAAv2, a novel automatic annotation paradigm, and QES, a quality evaluation metric, enabling the construction of MIMLv2, a web-scale, pixel-annotated manipulation localization dataset comprising 246,212 images, over 120× larger than existing handcrafted datasets such as IMD20. Methodologically, our approach integrates a constrained auxiliary annotation task, quality-based filtering, Object Jitter augmentation, and web-scale weakly supervised learning. The resulting Web-IML model achieves state-of-the-art performance across multiple real-world forgery benchmarks, surpassing the previous SOTA TruFor by 24.1 average IoU points (a 31% gain). This demonstrates the effectiveness and generalizability of large-scale weakly supervised learning for fine-grained manipulation localization.
📝 Abstract
Images manipulated with image editing tools can mislead viewers and pose significant risks to social security. However, accurately localizing the manipulated regions within an image remains a challenging problem. A main barrier in this area is the high cost of data acquisition and the severe lack of high-quality annotated datasets. To address this challenge, we introduce novel methods that mitigate data scarcity by leveraging readily available web data. We utilize a large collection of manually forged images from the web, together with annotations generated automatically from a simpler auxiliary task, constrained image manipulation localization. Specifically, we introduce CAAAv2, a new paradigm that automatically and accurately annotates manipulated regions at the pixel level. To further improve annotation quality, we propose QES, a novel metric that filters out unreliable annotations. Through CAAAv2 and QES, we construct MIMLv2, a large-scale, diverse, and high-quality dataset containing 246,212 manually forged images with pixel-level mask annotations, over 120× larger than existing handcrafted datasets such as IMD20. Additionally, we introduce Object Jitter, a technique that further enhances model training by generating high-quality manipulation artifacts. Building on these advances, we develop Web-IML, a new model designed to effectively leverage web-scale supervision for the image manipulation localization task. Extensive experiments demonstrate that our approach substantially alleviates the data scarcity problem and significantly improves the performance of various models on multiple real-world forgery benchmarks. With the proposed web supervision, Web-IML achieves a striking performance gain of 31% and surpasses the previous SOTA TruFor by 24.1 average IoU points. The dataset and code will be made publicly available at https://github.com/qcf-568/MIML.
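The annotate-then-filter pipeline described above can be sketched minimally as follows. This is purely illustrative: the abstract does not specify how QES is computed, so the `qes_score` function, the threshold value, and the data layout here are all hypothetical stand-ins for whatever quality metric and filtering rule CAAAv2 and QES actually use.

```python
def filter_annotations(samples, qes_score, threshold=0.5):
    """Keep only (image, mask) pairs whose quality score passes the threshold.

    samples:    iterable of (image, mask) pairs produced by an automatic annotator
    qes_score:  hypothetical callable mapping (image, mask) -> quality in [0, 1]
    threshold:  illustrative cutoff below which an annotation is discarded
    """
    return [(img, mask) for img, mask in samples if qes_score(img, mask) >= threshold]


# Toy usage: score a binary mask by its fraction of positive pixels.
# This stand-in score only demonstrates the filtering mechanics, not the real QES.
samples = [("img_a", [1, 1, 0]), ("img_b", [0, 0, 0])]
toy_score = lambda img, mask: sum(mask) / len(mask)
kept = filter_annotations(samples, toy_score, threshold=0.3)
```

The design point is that annotation generation and quality filtering are decoupled: any scoring function can be swapped in without changing the pipeline around it.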