🤖 AI Summary
This work addresses the critical threat that generative AI–enabled photorealistic text-image forgeries pose to document security, a challenge inadequately handled by existing methods, which rely on isolated visual cues and lack interpretable joint reasoning. The authors propose DocShield, a unified framework that formulates document forgery analysis as a vision–logic collaborative reasoning task. It introduces a Cross-Cues-aware Chain of Thought mechanism, which iteratively cross-validates visual anomalies against textual semantics to jointly achieve detection, localization, and explanation. The study also contributes RealText-V1, a multilingual dataset annotated with pixel-level masks and expert rationales, and employs GRPO optimization with a weighted multi-task reward strategy. Evaluated on T-IC13, DocShield improves macro-average F1 by 41.4% over specialized models and by 23.4% over GPT-4o, while also demonstrating robust performance on T-SROIE.
📝 Abstract
The rapid progress of generative AI has enabled increasingly realistic text-centric image forgeries, posing major challenges to document security. Existing forensic methods rely mainly on visual cues and lack evidence-based reasoning to reveal subtle text manipulations. Detection, localization, and explanation are often treated as isolated tasks, limiting reliability and interpretability. To tackle these challenges, we propose DocShield, the first unified framework to formulate text-centric forgery analysis as a visual-logical co-reasoning problem. At its core, a novel Cross-Cues-aware Chain of Thought (CCT) mechanism enables implicit agentic reasoning, iteratively cross-validating visual anomalies with textual semantics to produce consistent, evidence-grounded forensic analysis. We further introduce a Weighted Multi-Task Reward for GRPO-based optimization, aligning reasoning structure, spatial evidence, and authenticity prediction. Complementing the framework, we construct RealText-V1, a multilingual dataset of document-like text images with pixel-level manipulation masks and expert-level textual explanations. Extensive experiments show that DocShield significantly outperforms existing methods, improving macro-average F1 by 41.4% over specialized frameworks and by 23.4% over GPT-4o on T-IC13, with consistent gains on the challenging T-SROIE benchmark. Our dataset, model, and code will be publicly released.
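The Weighted Multi-Task Reward mentioned above can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the component rewards (`format_reward`, `iou_reward`, `accuracy_reward`), their weights, and the structural tags are hypothetical stand-ins for the paper's actual design, which is not specified in the abstract. The sketch only shows the general shape of combining a structure reward, a spatial-evidence reward, and an authenticity reward into one scalar for policy optimization.

```python
# Hypothetical sketch of a weighted multi-task reward for GRPO-style training.
# Component names, tags, and weights are illustrative assumptions, not the
# paper's actual implementation.

def format_reward(response: str) -> float:
    """1.0 if the response follows an expected reasoning structure
    (hypothetical <think>/<answer> tags), else 0.0."""
    return 1.0 if "<think>" in response and "<answer>" in response else 0.0

def iou_reward(pred_box, gt_box) -> float:
    """Intersection-over-union between a predicted and a ground-truth
    manipulation region, each given as (x1, y1, x2, y2)."""
    x1 = max(pred_box[0], gt_box[0])
    y1 = max(pred_box[1], gt_box[1])
    x2 = min(pred_box[2], gt_box[2])
    y2 = min(pred_box[3], gt_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred_box) + area(gt_box) - inter
    return inter / union if union > 0 else 0.0

def accuracy_reward(pred_label: str, gt_label: str) -> float:
    """1.0 if the authenticity verdict (e.g. 'forged'/'real') is correct."""
    return 1.0 if pred_label == gt_label else 0.0

def multi_task_reward(response, pred_box, gt_box, pred_label, gt_label,
                      w_fmt=0.2, w_loc=0.4, w_acc=0.4) -> float:
    """Weighted sum aligning reasoning structure, spatial evidence, and
    authenticity prediction (weights are illustrative)."""
    return (w_fmt * format_reward(response)
            + w_loc * iou_reward(pred_box, gt_box)
            + w_acc * accuracy_reward(pred_label, gt_label))
```

In a GRPO setup, a scalar of this form would score each sampled rollout, and advantages would be computed relative to the group mean; weighting the terms lets training trade off well-formed reasoning against localization and verdict accuracy.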