🤖 AI Summary
Existing benchmarks inadequately assess the ability of multimodal large language models (MLLMs) to reason about image-based fraud in authentic academic contexts. To address this gap, this work introduces a multi-task benchmark comprising over 4,000 questions that integrates real retracted papers and synthetically generated data across seven academic scenarios, five categories of scientific fraud, and 16 fine-grained image manipulation techniques. Crucially, it establishes the first mapping from fraud types to five core reasoning capabilities, enabling multidimensional, fine-grained evaluation of MLLMs' visual fraud reasoning. Experiments on 16 leading models reveal significant limitations: even the top-performing GPT-5 achieves only 56.15% accuracy, underscoring the substantial challenges current models face on such complex, real-world tasks.
📝 Abstract
We present THEMIS, a novel multi-task benchmark designed to comprehensively evaluate multimodal large language models (MLLMs) on visual fraud reasoning in real-world academic scenarios. Compared to existing benchmarks, THEMIS introduces three major advances. (1) Real-World Scenarios and Complexity: Our benchmark comprises over 4,000 questions spanning seven scenarios, derived from authentic retracted-paper cases and carefully curated multimodal synthetic data. With 60.47% of its images exhibiting complex textures, THEMIS bridges the critical gap between existing benchmarks and the complexity of real-world academic fraud. (2) Fraud-Type Diversity and Granularity: THEMIS systematically covers five challenging fraud types and introduces 16 fine-grained manipulation operations. On average, each sample undergoes multiple stacked manipulations, and the diversity and difficulty of these operations demand a high level of visual fraud reasoning from the models. (3) Multi-Dimensional Capability Evaluation: We establish a mapping from fraud types to five core visual fraud reasoning capabilities, enabling an evaluation that reveals the distinct strengths and specific weaknesses of different models across these capabilities. Experiments on 16 leading MLLMs show that even the best-performing model, GPT-5, achieves an overall accuracy of only 56.15%, demonstrating that our benchmark presents a stringent test. We expect THEMIS to advance the development of MLLMs for complex, real-world fraud reasoning tasks.