ReviewGuard: Enhancing Deficient Peer Review Detection via LLM-Driven Data Augmentation

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
The surge in manuscript submissions and the misuse of large language models (LLMs) have led to a proliferation of low-quality peer reviews, undermining scholarly integrity. Method: We propose the first LLM-driven, four-stage automated framework for detecting deficient reviews and classifying them at a fine-grained level. To address class imbalance, we construct high-quality annotated and synthetic data using GPT-4.1. Our dataset, the largest to date, integrates 24,657 real reviews from OpenReview and 46,438 synthetic reviews across 6,634 papers. We jointly fine-tune an encoder with an open-source LLM while explicitly modeling textual structure and sentiment features. Results: Hybrid training significantly improves binary classification recall (+12.3%) and F1 score (+9.7%). For the first time, we systematically quantify the rising prevalence of AI-generated review content. Our framework provides a scalable, empirically grounded technical foundation for strengthening academic integrity governance.

📝 Abstract
Peer review serves as the gatekeeper of science, yet the surge in submissions and widespread adoption of large language models (LLMs) in scholarly evaluation present unprecedented challenges. Recent work has focused on using LLMs to improve review efficiency or generate insightful review content. However, unchecked deficient reviews from both human experts and AI systems threaten to systematically undermine the peer review ecosystem and compromise academic integrity. To address this critical issue, we introduce ReviewGuard, an automated system for detecting and categorizing deficient reviews. ReviewGuard employs a comprehensive four-stage LLM-driven framework that: (1) collects ICLR and NeurIPS papers with their corresponding reviews from OpenReview; (2) annotates review types using GPT-4.1 with human validation; (3) addresses class imbalance and data scarcity through LLM-driven synthetic data augmentation, producing a final corpus of 6,634 papers, 24,657 real reviews, and 46,438 synthetic reviews; and (4) fine-tunes both encoder-based models and open-source LLMs. We perform comprehensive feature analysis of the structure and quality of the review text. Compared to sufficient reviews, deficient reviews demonstrate lower rating scores, higher self-reported confidence, reduced structural complexity, and a higher proportion of negative sentiment. AI-generated text detection reveals that, since ChatGPT's emergence, AI-generated reviews have increased dramatically. In the evaluation of deficient review detection models, mixed training with synthetic and real review data provides substantial gains in recall and F1 score on the binary task. This study presents the first LLM-driven system for detecting deficient peer reviews, providing evidence to inform AI governance in peer review while offering insights into human-AI collaboration for maintaining academic integrity.
Problem

Research questions and friction points this paper is trying to address.

Detecting deficient peer reviews from both humans and AI systems
Addressing class imbalance and data scarcity in review datasets
Developing an automated system to maintain academic review integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven synthetic data augmentation for class imbalance
Four-stage framework combining real and synthetic reviews
Fine-tuning encoder-based models and open-source LLMs
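The augmentation idea above can be sketched in a few lines: synthetic deficient reviews are mixed into the real data only as far as needed to balance the minority class. This is a minimal illustration, not the paper's implementation; the `Review` type, `build_training_set` function, and the 1:1 target ratio are all assumptions for the sketch.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Review:
    text: str
    label: str       # "sufficient" or "deficient"
    synthetic: bool  # True if the review is LLM-generated augmentation

def build_training_set(real: List[Review], synthetic: List[Review],
                       target_ratio: float = 1.0) -> List[Review]:
    """Top up the deficient (minority) class with synthetic reviews until
    it reaches target_ratio times the sufficient class size."""
    sufficient = [r for r in real if r.label == "sufficient"]
    deficient = [r for r in real if r.label == "deficient"]
    needed = int(target_ratio * len(sufficient)) - len(deficient)
    extra = [r for r in synthetic if r.label == "deficient"][:max(needed, 0)]
    return sufficient + deficient + extra

# Toy data: 4 sufficient vs. 1 deficient real review (imbalanced).
real = [Review(f"r{i}", "sufficient", False) for i in range(4)]
real.append(Review("vague one-line review", "deficient", False))
synthetic = [Review(f"s{i}", "deficient", True) for i in range(10)]

mixed = build_training_set(real, synthetic)
# 3 synthetic deficient reviews are added, yielding a balanced 8-review set.
```

A classifier (encoder-based or LLM) would then be fine-tuned on `mixed` rather than on the imbalanced real data alone.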
👥 Authors
Haoxuan Zhang
University of North Texas, Denton, TX, USA
Ruochi Li
North Carolina State University, Computer Science
Sarthak Shrestha
University of North Texas, Denton, TX, USA
Shree Harshini Mamidala
University of North Texas, Denton, TX, USA
Revanth Putta
University of North Texas, Denton, TX, USA
Arka Krishan Aggarwal
University of North Texas, Denton, TX, USA
Ting Xiao
University of North Texas, Denton, TX, USA
Junhua Ding
University of North Texas, Denton, TX, USA
Haihua Chen
University of North Texas, Denton, TX, USA