🤖 AI Summary
Existing text-centric forgery analysis methods rely predominantly on coarse-grained visual features, lack fine-grained reasoning capabilities, and treat detection, grounding, and explanation as disjoint tasks, overlooking their intrinsic interdependence. LogicLens addresses these limitations with a unified visual-textual co-reasoning framework that reformulates the three tasks into a single joint objective. It introduces a Cross-Cues-aware Chain of Thought (CCT) that iteratively cross-validates visual cues against textual logic; a cognitively aligned, hierarchical multi-agent annotation pipeline, PR$^2$ (Perceiver, Reasoner, Reviewer); and RealText, a fine-grained dataset of 5,397 images with pixel-level segmentation masks, natural-language explanations, and authenticity labels. Training uses GRPO-based optimization with a weighted multi-task reward to align all three objectives. In zero-shot evaluation on T-IC13, LogicLens surpasses a specialized framework by 41.4% and GPT-4o by 23.4% in macro-average F1; on the dense-text T-SROIE benchmark, it leads other MLLM-based methods in mF1, CSS, and macro-average F1. Code, models, and RealText will be publicly released.
📝 Abstract
Sophisticated text-centric forgeries, fueled by rapid AIGC advancements, pose a significant threat to societal security and information authenticity. Current methods for text-centric forgery analysis are often limited to coarse-grained visual analysis and lack the capacity for sophisticated reasoning. Moreover, they typically treat detection, grounding, and explanation as discrete sub-tasks, overlooking the intrinsic relationships among them that could be exploited for holistic performance gains. To address these challenges, we introduce LogicLens, a unified framework for visual-textual co-reasoning that reformulates these objectives into a joint task. The deep reasoning of LogicLens is powered by our novel Cross-Cues-aware Chain of Thought (CCT) mechanism, which iteratively cross-validates visual cues against textual logic. To ensure robust alignment across all tasks, we further propose a weighted multi-task reward function for GRPO-based optimization. Complementing this framework, we design PR$^2$ (Perceiver, Reasoner, Reviewer), a hierarchical, iterative multi-agent pipeline that generates high-quality, cognitively aligned annotations. We then construct RealText, a diverse dataset of 5,397 images with fine-grained annotations, including textual explanations, pixel-level segmentation masks, and authenticity labels, for model training. Extensive experiments demonstrate the superiority of LogicLens across multiple benchmarks. In zero-shot evaluation on T-IC13, it surpasses the specialized framework by 41.4% and GPT-4o by 23.4% in macro-average F1 score. Moreover, on the challenging dense-text T-SROIE dataset, it establishes a significant lead over other MLLM-based methods in mF1, CSS, and macro-average F1. Our dataset, model, and code will be made publicly available.
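The abstract mentions a weighted multi-task reward function that aligns detection, grounding, and explanation during GRPO-based optimization. A minimal sketch of how such a reward might be composed is shown below; the component rewards, their weights, and all function names are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a weighted multi-task reward for GRPO-style
# optimization. The sub-rewards and the weight vector are assumptions
# chosen for illustration, not values from the LogicLens paper.

def detection_reward(pred_label: str, gold_label: str) -> float:
    """Binary reward for the authenticity-classification sub-task."""
    return 1.0 if pred_label == gold_label else 0.0

def grounding_reward(pred_mask: set, gold_mask: set) -> float:
    """IoU between predicted and ground-truth forged-region pixels."""
    if not pred_mask and not gold_mask:
        return 1.0
    union = pred_mask | gold_mask
    return len(pred_mask & gold_mask) / len(union) if union else 0.0

def explanation_reward(pred_text: str, gold_text: str) -> float:
    """Crude token-overlap proxy for natural-language explanation quality."""
    pred, gold = set(pred_text.split()), set(gold_text.split())
    return len(pred & gold) / len(gold) if gold else 0.0

def multi_task_reward(pred: dict, gold: dict,
                      w=(0.4, 0.4, 0.2)) -> float:
    """Weighted sum coupling detection, grounding, and explanation."""
    r_det = detection_reward(pred["label"], gold["label"])
    r_grd = grounding_reward(pred["mask"], gold["mask"])
    r_exp = explanation_reward(pred["explanation"], gold["explanation"])
    return w[0] * r_det + w[1] * r_grd + w[2] * r_exp

# Toy rollout: correct label, partially overlapping mask, similar rationale.
pred = {"label": "forged", "mask": {(0, 0), (0, 1)},
        "explanation": "the date font differs from surrounding text"}
gold = {"label": "forged", "mask": {(0, 1), (1, 1)},
        "explanation": "the date font differs from the receipt body"}
print(round(multi_task_reward(pred, gold), 3))
```

In a GRPO loop, this scalar would score each sampled completion within a group before advantage normalization; the joint weighting is what lets gradient signal from one sub-task regularize the others.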