Can Multi-modal (reasoning) LLMs detect document manipulation?

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Document tampering poses severe threats to industries reliant on trustworthy documents—such as finance and government services—necessitating robust, generalizable detection methods. This paper systematically evaluates state-of-the-art multimodal large language models (MLLMs), including GPT-4o, Claude 3.5 Sonnet, and Qwen-VL, in zero-shot settings for detecting diverse fraud patterns: textual alterations, layout misalignments, and numerical inconsistencies. We introduce a fine-grained benchmark, conduct systematic prompt engineering, and perform step-by-step reasoning analysis. Our findings reveal: (1) top-tier MLLMs substantially outperform conventional OCR- or rule-based detectors, especially under out-of-distribution conditions; (2) model parameter count exhibits no strong correlation with detection accuracy—task-specific fine-tuning proves more decisive; and (3) integrating vision-language joint reasoning enhances both interpretability and cross-domain generalization. This work establishes a novel, empirically grounded paradigm for developing explainable and scalable document anti-fraud systems.

📝 Abstract
Document fraud poses a significant threat to industries reliant on secure and verifiable documentation, necessitating robust detection mechanisms. This study investigates the efficacy of state-of-the-art multi-modal large language models (LLMs), including OpenAI O1, OpenAI 4o, Gemini Flash (thinking), Deepseek Janus, Grok, Llama 3.2 and 4, Qwen 2 and 2.5 VL, Mistral Pixtral, and Claude 3.5 and 3.7 Sonnet, in detecting fraudulent documents. We benchmark these models against each other and prior work on document fraud detection techniques using a standard dataset with real transactional documents. Through prompt optimization and detailed analysis of the models' reasoning processes, we evaluate their ability to identify subtle indicators of fraud, such as tampered text, misaligned formatting, and inconsistent transactional sums. Our results reveal that top-performing multi-modal LLMs demonstrate superior zero-shot generalization, outperforming conventional methods on out-of-distribution datasets, while several vision LLMs exhibit inconsistent or subpar performance. Notably, model size and advanced reasoning capabilities show limited correlation with detection accuracy, suggesting task-specific fine-tuning is critical. This study underscores the potential of multi-modal LLMs in enhancing document fraud detection systems and provides a foundation for future research into interpretable and scalable fraud mitigation strategies.
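One fraud indicator the abstract names, inconsistent transactional sums, can be illustrated with a minimal arithmetic check. This is a sketch, not the paper's method; the function name and tolerance are illustrative assumptions:

```python
def sums_consistent(line_items, stated_total, tol=0.005):
    """Flag a document whose extracted line items do not add up
    to its stated total (tol absorbs rounding/float noise)."""
    return abs(sum(line_items) - stated_total) <= tol

# A receipt whose items match the printed total passes the check,
# while a tampered total is flagged as inconsistent.
genuine = sums_consistent([12.50, 3.99, 1.51], 18.00)   # True
tampered = sums_consistent([12.50, 3.99, 1.51], 28.00)  # False
```

A rule-based detector would run checks like this on OCR output; the paper's point is that multi-modal LLMs can catch such inconsistencies zero-shot, directly from the document image.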
Problem

Research questions and friction points this paper is trying to address.

Evaluating multi-modal LLMs' ability to detect document fraud
Benchmarking models on identifying tampered text and formatting inconsistencies
Assessing zero-shot generalization versus traditional fraud detection methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal LLMs detect document manipulation
Prompt optimization enhances fraud identification
Zero-shot generalization outperforms conventional methods
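The zero-shot setup above amounts to sending the document image plus a carefully worded instruction to a vision LLM. The paper's optimized prompts are not reproduced here; the sketch below only assembles an illustrative request payload in the OpenAI chat-completions message format (prompt wording and model name are assumptions, and no API call is made):

```python
import base64

def build_fraud_check_request(image_bytes, model="gpt-4o"):
    """Assemble a zero-shot document-fraud prompt in the OpenAI
    chat-completions message format (wording is illustrative)."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Inspect this transactional document for signs of fraud: "
                          "tampered text, misaligned formatting, or totals that do "
                          "not match the line items. Answer FRAUDULENT or GENUINE, "
                          "then explain your reasoning step by step.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

# Placeholder bytes stand in for a real scanned document.
request = build_fraud_check_request(b"\x89PNG-placeholder")
```

Asking for a verdict followed by step-by-step reasoning mirrors the paper's analysis of the models' reasoning processes, and makes the decision auditable rather than a bare label.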
👥 Authors
Zisheng Liang
Duke University
Kidus Zewde
Scam.ai
Rudra Pratap Singh
Indian Institute of Technology, Roorkee
Disha Patil
Indian Institute of Technology, Roorkee
Zexi Chen
Zhejiang University
Jiayu Xue
University of North Carolina at Chapel Hill
Yao Yao
University of Wisconsin Madison
Yifei Chen
Columbia University
Qinzhe Liu
Simiao Ren
Scam.ai