🤖 AI Summary
Document tampering poses severe threats to industries that depend on trustworthy documents, such as finance and government services, necessitating robust, generalizable detection methods. This paper systematically evaluates state-of-the-art multi-modal large language models (MLLMs), spanning offerings from OpenAI, Anthropic, Google, Meta, Alibaba, Mistral, DeepSeek, and xAI, in zero-shot settings for detecting diverse fraud patterns: tampered text, misaligned formatting, and inconsistent transactional sums. The models are benchmarked against each other and against prior fraud-detection techniques on a standard dataset of real transactional documents, with systematic prompt optimization and step-by-step analysis of the models' reasoning. The findings reveal: (1) top-tier MLLMs substantially outperform conventional detection methods, especially under out-of-distribution conditions, while several vision LLMs perform inconsistently; (2) model size and advanced reasoning capability show limited correlation with detection accuracy, so task-specific fine-tuning proves more decisive; and (3) analyzing the models' reasoning processes aids interpretability. This work provides an empirically grounded foundation for developing explainable and scalable document anti-fraud systems.
📝 Abstract
Document fraud poses a significant threat to industries reliant on secure and verifiable documentation, necessitating robust detection mechanisms. This study investigates the efficacy of state-of-the-art multi-modal large language models (LLMs), including OpenAI o1, GPT-4o, Gemini Flash (thinking), DeepSeek Janus, Grok, Llama 3.2 and 4, Qwen 2 and 2.5 VL, Mistral Pixtral, and Claude 3.5 and 3.7 Sonnet, in detecting fraudulent documents. We benchmark these models against each other and against prior document fraud detection techniques using a standard dataset of real transactional documents. Through prompt optimization and detailed analysis of the models' reasoning processes, we evaluate their ability to identify subtle indicators of fraud, such as tampered text, misaligned formatting, and inconsistent transactional sums. Our results reveal that top-performing multi-modal LLMs demonstrate superior zero-shot generalization, outperforming conventional methods on out-of-distribution datasets, while several vision LLMs exhibit inconsistent or subpar performance. Notably, model size and advanced reasoning capabilities show limited correlation with detection accuracy, suggesting that task-specific fine-tuning is critical. This study underscores the potential of multi-modal LLMs in enhancing document fraud detection systems and provides a foundation for future research into interpretable and scalable fraud mitigation strategies.
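To make the zero-shot evaluation protocol concrete, a minimal sketch of how such a study might prompt a vision LLM and score its verdict is shown below. The prompt wording and the `parse_verdict` helper are illustrative assumptions, not the paper's actual prompts or scoring code.

```python
# Hypothetical zero-shot fraud-detection prompt: the document image would be
# attached alongside this text in a multi-modal API call, and the model's
# reply parsed into a binary fraud label for benchmarking.
FRAUD_PROMPT = (
    "You are auditing the attached transactional document image. "
    "Look for signs of tampering: edited or overwritten text, "
    "misaligned formatting, and totals that do not match the line items. "
    "Answer with 'VERDICT: FRAUDULENT' or 'VERDICT: GENUINE', followed by "
    "a short explanation of the evidence."
)

def parse_verdict(response_text):
    """Map a model reply to True (fraud), False (genuine), or None (unclear)."""
    text = response_text.upper()
    if "VERDICT: FRAUDULENT" in text:
        return True
    if "VERDICT: GENUINE" in text:
        return False
    return None
```

Forcing a fixed verdict token keeps the free-form explanation available for reasoning analysis while still yielding a label that can be compared against ground truth across models.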