Creating Blank Canvas Against AI-enabled Image Forgery

📅 2025-11-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
AI-generated content (AIGC) image editing poses significant challenges for forensic detection, as conventional methods relying on low-level artifacts (e.g., noise inconsistencies) fail against increasingly sophisticated, semantically coherent edits. Method: This paper proposes a perception-suppression-based tampering localization framework: it applies frequency-domain-aware adversarial perturbations to transform input images into “semantic blank canvases”—inputs rendered entirely invisible to the Segment Anything Model (SAM), thereby nullifying its prior perceptual knowledge of original content. Subsequent AIGC edits, deprived of contextual constraints, elicit salient anomalous responses in SAM. Contribution/Results: This work pioneers systematic suppression of SAM’s visual pathway—departing fundamentally from artifact-driven detection paradigms. Extensive experiments demonstrate high robustness and fine-grained localization accuracy across mainstream AIGC editing tasks (e.g., inpainting, object insertion), achieving an average 12.6% improvement in detection AUC and strong generalization across unseen editors and domains.

Technology Category

Application Category

📝 Abstract
AIGC-based image editing technology has greatly simplified the realistic-level image modification, causing serious potential risks of image forgery. This paper introduces a new approach to tampering detection using the Segment Anything Model (SAM). Instead of training SAM to identify tampered areas, we propose a novel strategy. The entire image is transformed into a blank canvas from the perspective of neural models. Any modifications to this blank canvas would be noticeable to the models. To achieve this idea, we introduce adversarial perturbations to prevent SAM from ``seeing anything'', allowing it to identify forged regions when the image is tampered with. Due to SAM's powerful perceiving capabilities, naive adversarial attacks cannot completely tame SAM. To thoroughly deceive SAM and make it blind to the image, we introduce a frequency-aware optimization strategy, which further enhances the capability of tamper localization. Extensive experimental results demonstrate the effectiveness of our method.
Problem

Research questions and friction points this paper is trying to address.

Detect image forgery by transforming images into blank canvases for neural models.
Use adversarial perturbations to blind SAM and reveal tampered regions.
Enhance tamper localization with frequency-aware optimization for better accuracy.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using adversarial perturbations to blind SAM
Frequency-aware optimization enhances tamper localization
Transforming images into blank canvas for detection
🔎 Similar Papers
No similar papers found.
Q
Qi Song
Department of Computer Science, Hong Kong Baptist University
Z
Ziyuan Luo
Department of Computer Science, Hong Kong Baptist University
Renjie Wan
Renjie Wan
Department of Computer Science, Hong Kong Baptist University
Digital WatermarkingAI SecurityImage Processing