LogicLens: Visual-Logical Co-Reasoning for Text-Centric Forgery Analysis

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-centric forgery analysis methods rely predominantly on coarse-grained visual features, lack fine-grained reasoning capabilities, and treat detection, localization, and explanation as disjoint tasks, overlooking their intrinsic interdependence. LogicLens addresses these limitations with a visual-textual co-reasoning framework that unifies the three tasks into a single joint objective. It introduces a Cross-Cues-aware Chain of Thought (CCT) that iteratively cross-validates visual cues against textual logic, a cognitively aligned PR² (Perceiver, Reasoner, Reviewer) multi-agent annotation pipeline, and RealText, a fine-grained dataset of 5,397 images with pixel-level segmentation masks, authenticity labels, and natural-language explanations. Training couples a multimodal large language model with GRPO-based optimization under a weighted multi-task reward. In zero-shot evaluation on T-IC13, LogicLens improves macro-average F1 by 41.4% over the specialized framework and by 23.4% over GPT-4o; on the dense-text T-SROIE benchmark it leads other MLLM-based methods in mF1, CSS, and macro-average F1. The dataset, model, and code will be publicly released.

📝 Abstract
Sophisticated text-centric forgeries, fueled by rapid AIGC advancements, pose a significant threat to societal security and information authenticity. Current methods for text-centric forgery analysis are often limited to coarse-grained visual analysis and lack the capacity for sophisticated reasoning. Moreover, they typically treat detection, grounding, and explanation as discrete sub-tasks, overlooking their intrinsic relationships for holistic performance enhancement. To address these challenges, we introduce LogicLens, a unified framework for Visual-Textual Co-reasoning that reformulates these objectives into a joint task. The deep reasoning of LogicLens is powered by our novel Cross-Cues-aware Chain of Thought (CCT) mechanism, which iteratively cross-validates visual cues against textual logic. To ensure robust alignment across all tasks, we further propose a weighted multi-task reward function for GRPO-based optimization. Complementing this framework, we first design the PR$^2$ (Perceiver, Reasoner, Reviewer) pipeline, a hierarchical and iterative multi-agent system that generates high-quality, cognitively-aligned annotations. We then construct RealText, a diverse dataset comprising 5,397 images with fine-grained annotations, including textual explanations, pixel-level segmentation, and authenticity labels for model training. Extensive experiments demonstrate the superiority of LogicLens across multiple benchmarks. In a zero-shot evaluation on T-IC13, it surpasses the specialized framework by 41.4% and GPT-4o by 23.4% in macro-average F1 score. Moreover, on the challenging dense-text T-SROIE dataset, it establishes a significant lead over other MLLM-based methods in mF1, CSS, and macro-average F1. Our dataset, model, and code will be made publicly available.
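The PR$^2$ pipeline described above can be sketched as a simple loop over three agents. Everything below is a hypothetical stand-in: the paper does not specify agent interfaces, and the `perceive`, `reason`, and `review` stubs here only illustrate the hierarchical, iterative hand-off, not the actual system.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    mask: list        # placeholder for a pixel-level forgery mask
    label: str        # "real" or "forged"
    explanation: str  # natural-language rationale
    approved: bool = False

def perceive(image):
    """Perceiver: extract coarse visual cues and OCR text (stubbed)."""
    return {"regions": [(10, 10, 40, 20)], "text": "TOTAL: $120.00"}

def reason(cues):
    """Reasoner: cross-check visual cues against textual logic (stubbed)."""
    return Annotation(
        mask=list(cues["regions"]),
        label="forged",
        explanation="Stroke width in the amount field differs from the body text.",
    )

def review(annotation):
    """Reviewer: approve the draft or send it back for another round (stubbed)."""
    annotation.approved = True
    return annotation

def pr2_annotate(image, max_rounds=3):
    """Iterate Perceiver -> Reasoner -> Reviewer until the annotation passes review."""
    annotation = None
    for _ in range(max_rounds):
        annotation = review(reason(perceive(image)))
        if annotation.approved:
            break
    return annotation
```

The iterative structure is the point: a rejected draft triggers another perceive-reason round, which is how the pipeline produces the cognitively aligned annotations used to build RealText.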
Problem

Research questions and friction points this paper is trying to address.

Detects and explains sophisticated text-centric image forgeries
Unifies detection, grounding, and explanation into a joint reasoning task
Addresses limitations of coarse visual analysis and isolated subtasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified visual-textual co-reasoning framework for joint tasks
Cross-cues-aware chain of thought for iterative cross-validation
Weighted multi-task reward function for GRPO-based optimization
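A minimal sketch of the last idea, assuming the standard GRPO recipe: a weighted sum over per-task rewards, standardized within each sampled group to form advantages. The component names (`det_correct`, `loc_iou`, `expl_score`) and the weights are illustrative placeholders, not the paper's actual reward terms.

```python
def multi_task_reward(det_correct, loc_iou, expl_score,
                      w_det=0.4, w_loc=0.3, w_expl=0.3):
    """Weighted sum of detection, localization, and explanation rewards.

    det_correct: whether the real/forged verdict is right
    loc_iou:     IoU between predicted and ground-truth forged region, in [0, 1]
    expl_score:  quality score of the textual explanation, in [0, 1]
    """
    r_det = 1.0 if det_correct else 0.0
    return w_det * r_det + w_loc * loc_iou + w_expl * expl_score

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: standardize rewards within one sampled group,
    as GRPO does in place of a learned value baseline."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

Tying the tasks into one scalar reward is what lets a single policy update trade off detection, grounding, and explanation jointly instead of optimizing each in isolation.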
Fanwei Zeng (Ant Group)
Changtao Miao (University of Science and Technology of China)
Jing Huang (Ant Group)
Zhiya Tan (Nanyang Technological University)
Shutao Gong (Ant Group)
Xiaoming Yu (Ant Group)
Yang Wang (Ant Group)
Huazhe Tan (Ant Group)
Weibin Yao (Ant Group)
Jianshu Li (National University of Singapore)
Computer Vision · Machine Learning · Face Analysis