TextShield-R1: Reinforced Reasoning for Tampered Text Detection

📅 2026-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited accuracy and heavy reliance on manual annotations in existing methods for detecting and localizing tampered text in images. We propose the first reinforcement learning–based multimodal large language model framework, which integrates a three-stage pipeline—forensic continual pretraining, grouped relative policy optimization, and OCR correction—guided by a curriculum learning strategy that progresses from easy to hard examples. This approach substantially reduces dependence on labeled data while maintaining high interpretability. To support comprehensive evaluation, we construct TFR, a new benchmark comprising 45k multilingual samples spanning diverse tampering types. Extensive experiments demonstrate that our method achieves state-of-the-art performance across cross-style, cross-method, and cross-lingual scenarios, offering both superior accuracy and strong explainability.
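The summary's easy-to-hard curriculum can be made concrete with a minimal sketch. Assuming each training sample carries a precomputed difficulty score (a hypothetical field; the paper's actual difficulty measure is not specified here), the schedule sorts samples and releases progressively harder slices of the data per stage:

```python
# Minimal sketch of an easy-to-hard curriculum schedule (an illustration,
# not the paper's implementation). Each sample is assumed to carry a
# "difficulty" score in [0, 1].
def curriculum_batches(samples, num_stages=3):
    """Sort by difficulty, then release progressively larger slices:
    stage k trains on the easiest k/num_stages fraction of the data."""
    ordered = sorted(samples, key=lambda s: s["difficulty"])
    stages = []
    for k in range(1, num_stages + 1):
        cutoff = len(ordered) * k // num_stages
        stages.append(ordered[:cutoff])
    return stages

# Hypothetical difficulty scores for five samples.
data = [{"id": i, "difficulty": d} for i, d in enumerate([0.9, 0.1, 0.5, 0.3, 0.7])]
stages = curriculum_batches(data)
# Stage 1 sees only the easiest sample; the final stage sees everything.
```

Later stages are supersets of earlier ones, so the model never stops seeing easy examples while harder ones are phased in.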

📝 Abstract
The growing prevalence of tampered images poses serious security threats, highlighting the urgent need for reliable detection methods. Multimodal large language models (MLLMs) show strong potential for analyzing tampered images and generating interpretations. However, they still struggle to identify micro-level artifacts, exhibit low accuracy when localizing tampered text regions, and rely heavily on expensive annotations for forgery interpretation. To this end, we introduce TextShield-R1, the first reinforcement learning–based MLLM solution for tampered text detection and reasoning. Specifically, our approach introduces Forensic Continual Pre-training, an easy-to-hard curriculum that prepares the MLLM for tampered text detection by harnessing large-scale, inexpensive data from natural-image forensics and OCR tasks. During fine-tuning, we perform Group Relative Policy Optimization with novel reward functions to reduce annotation dependency and improve reasoning capabilities. At inference time, we enhance localization accuracy via OCR Rectification, a method that leverages the MLLM's strong text recognition abilities to refine its predictions. Furthermore, to support rigorous evaluation, we introduce the Text Forensics Reasoning (TFR) benchmark, comprising over 45k real and tampered images across 16 languages, 10 tampering techniques, and diverse domains. Rich reasoning-style annotations are included, allowing for comprehensive assessment. Our TFR benchmark simultaneously addresses seven major limitations of existing benchmarks and enables robust evaluation under cross-style, cross-method, and cross-language conditions. Extensive experiments demonstrate that TextShield-R1 significantly advances the state of the art in interpretable tampered text detection.
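The Group Relative Policy Optimization step mentioned above can be illustrated by its core idea: instead of a learned value critic, the advantage of each sampled response is its reward normalized within a group of responses to the same prompt. The sketch below shows only this grouped-advantage computation with hypothetical reward values; the paper's actual reward functions and training loop are not reproduced here.

```python
# Sketch of GRPO's group-relative advantage computation (illustrative only;
# the reward values and their composition are hypothetical assumptions).
from statistics import mean, pstdev

def grpo_advantages(rewards, eps=1e-6):
    """Normalize each response's reward by the mean and standard
    deviation of its group, so advantages are relative rankings
    within the group and no value critic is required."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four responses sampled for one image, each scored by, say,
# a correctness reward plus a format bonus for structured reasoning.
rewards = [1.0, 0.2, 1.2, 0.2]
advs = grpo_advantages(rewards)
# Advantages sum to ~0: above-average responses are reinforced,
# below-average ones are penalized.
```

Because the baseline comes from the group itself, the policy gradient only needs sampled responses and scalar rewards, which is what reduces the annotation dependency the abstract describes.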
Problem

Research questions and friction points this paper is trying to address.

tampered text detection
multimodal large language models
forgery localization
annotation dependency
text forensics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Multimodal Large Language Models
Tampered Text Detection
Forensic Continual Pre-training
OCR Rectification