Is Your Paper Being Reviewed by an LLM? A New Benchmark Dataset and Approach for Detecting AI Text in Peer Review

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of detecting LLM-generated peer reviews, a critical yet underexplored threat to academic integrity. It introduces the first large-scale AI-detection benchmark for peer review, comprising 788,984 paired human and AI-written reviews, and systematically shows that state-of-the-art detectors degrade severely in this domain (average accuracy below 60%). To close this gap, the authors propose a fine-grained detection method tailored to review text: it learns contrastive representations from reviews generated by diverse LLMs (e.g., GPT-4, Claude, Llama) alongside authentic human reviews and applies a discriminative classifier. The approach outperforms 18 leading AI-text detectors on review-level detection, reaching an AUC above 0.92, and delivers the first reproducible, empirically validated detection pipeline designed specifically for scholarly peer review, filling a key evidentiary gap in monitoring AI-driven academic misconduct.
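The summary's "contrastive features plus discriminative classification" framing can be illustrated at toy scale. The sketch below is not the paper's method: it is a minimal stdlib-only stand-in that trains a logistic-regression classifier on bag-of-words features of a few invented review snippets (all texts, labels, and phrases here are hypothetical, chosen only to make the two classes separable):

```python
import math
from collections import Counter

# Hypothetical toy corpus: label 1 = AI-written, 0 = human-written.
REVIEWS = [
    ("Overall the paper is comprehensive and well organized and the contributions are clearly presented.", 1),
    ("The comprehensive experiments demonstrate the effectiveness of the proposed approach.", 1),
    ("Equation 4 drops the regularizer introduced in Section 3, so the proof of Lemma 2 does not go through.", 0),
    ("The baseline in Table 2 is mistuned; rerunning it with the authors' own grid closes the reported gap.", 0),
]

def build_vocab(texts):
    return sorted({w for t in texts for w in t.lower().split()})

def featurize(text, vocab):
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in vocab]

def train_logreg(X, y, lr=0.5, epochs=200):
    """Plain stochastic gradient descent on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = max(-30.0, min(30.0, b + sum(wj * xj for wj, xj in zip(w, xi))))
            grad = 1.0 / (1.0 + math.exp(-z)) - yi  # dLoss/dz for logistic loss
            w = [wj - lr * grad * xj for wj, xj in zip(w, xi)]
            b -= lr * grad
    return w, b

def score(text, vocab, w, b):
    """Probability that a review is AI-written under the toy model."""
    z = b + sum(wj * xj for wj, xj in zip(w, featurize(text, vocab)))
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

vocab = build_vocab(t for t, _ in REVIEWS)
X = [featurize(t, vocab) for t, _ in REVIEWS]
w, b = train_logreg(X, [y for _, y in REVIEWS])
```

A real detector in this setting would replace bag-of-words counts with learned representations and train on hundreds of thousands of review pairs; the structure, however (featurize, fit a discriminative classifier, score new reviews), is the same.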

📝 Abstract
Peer review is a critical process for ensuring the integrity of published scientific research. Confidence in this process is predicated on the assumption that experts in the relevant domain give careful consideration to the merits of manuscripts that are submitted for publication. With the recent rapid advancements in large language models (LLMs), a new risk to the peer review process is that negligent reviewers will rely on LLMs to perform the often time-consuming process of reviewing a paper. However, there is a lack of existing resources for benchmarking the detectability of AI text in the domain of peer review. To address this deficiency, we introduce a comprehensive dataset containing a total of 788,984 AI-written peer reviews paired with corresponding human reviews, covering 8 years of papers submitted to each of two leading AI research conferences (ICLR and NeurIPS). We use this new resource to evaluate the ability of 18 existing AI text detection algorithms to distinguish between peer reviews written by humans and different state-of-the-art LLMs. Motivated by the shortcomings of existing methods, we propose a new detection approach which surpasses existing methods in the identification of AI-written peer reviews. Our work reveals the difficulty of identifying AI-generated text at the individual peer review level, highlighting the urgent need for new tools and methods to detect this unethical use of generative AI.
Problem

Research questions and friction points this paper is trying to address.

Detect AI text in peer reviews
Benchmark AI detection algorithms
Develop new AI text detection methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces comprehensive AI-written reviews dataset
Evaluates 18 AI text detection algorithms
Proposes new superior AI review detection approach
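The AUC figure quoted in the summary has a simple probabilistic reading: it is the chance that the detector scores a randomly chosen AI-written review above a randomly chosen human one, with ties counted as half. A minimal stdlib sketch of that computation (the scores are made-up illustrative values, not results from the paper):

```python
def auc(ai_scores, human_scores):
    """Pairwise (Mann-Whitney) estimate of the area under the ROC curve."""
    wins = 0.0
    for a in ai_scores:
        for h in human_scores:
            if a > h:
                wins += 1.0
            elif a == h:
                wins += 0.5  # ties count as half a win
    return wins / (len(ai_scores) * len(human_scores))

# Illustrative detector scores (higher = more likely AI-written);
# one AI/human pair is misordered, so AUC falls just below 1.
print(auc([0.9, 0.8, 0.6], [0.7, 0.2, 0.1]))
```

An AUC of 0.92 thus means the detector ranks the AI review above the human one in about 92% of random AI/human pairs, regardless of any particular decision threshold.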