🤖 AI Summary
The surge in manuscript submissions and the misuse of large language models (LLMs) have led to a proliferation of low-quality peer reviews, undermining scholarly integrity.
Method: We propose the first LLM-driven, four-stage automated framework for detecting deficient reviews and classifying them at a fine-grained level. To address class imbalance, we construct high-quality annotated and synthetic data using GPT-4.1. Our dataset, the largest to date, integrates 24,657 real reviews from OpenReview and 46,438 synthetic reviews across 6,634 papers. We jointly fine-tune an encoder-based model and an open-source LLM while explicitly modeling textual structure and sentiment features.
Results: Hybrid training on real and synthetic data substantially improves binary classification recall (+12.3%) and F1 score (+9.7%). For the first time, we systematically quantify the rising prevalence of AI-generated review content. Our framework provides a scalable, empirically grounded technical foundation for strengthening academic integrity governance.
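To make the reported gains concrete, the sketch below shows how recall and F1 for the rare "deficient" class are computed, and why adding synthetic positives to an imbalanced training set can lift both. All data here is invented for illustration; this is not the paper's implementation or its actual numbers.

```python
def recall_f1(y_true, y_pred, positive=1):
    """Recall and F1 for the positive ('deficient') class, from scratch."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, f1

# Hypothetical held-out real reviews: 1 = deficient, 0 = sufficient.
y_true         = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred_real_only = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # model misses most positives
pred_mixed     = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]  # catches more positives

print(recall_f1(y_true, pred_real_only))  # (0.25, 0.4)
print(recall_f1(y_true, pred_mixed))      # (0.75, 0.75)
```

Recall is the metric most directly hurt by class imbalance, since a model that predicts "sufficient" for everything still scores well on accuracy; this is why the evaluation emphasizes recall and F1 rather than accuracy.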
📝 Abstract
Peer review serves as the gatekeeper of science, yet the surge in submissions and the widespread adoption of large language models (LLMs) in scholarly evaluation present unprecedented challenges. Recent work has focused on using LLMs to improve review efficiency or generate insightful review content. However, unchecked deficient reviews from both human experts and AI systems threaten to systematically undermine the peer review ecosystem and compromise academic integrity. To address this critical issue, we introduce ReviewGuard, an automated system for detecting and categorizing deficient reviews. ReviewGuard employs a comprehensive four-stage LLM-driven framework that: (1) collects ICLR and NeurIPS papers and their corresponding reviews from OpenReview; (2) annotates review types using GPT-4.1 with human validation; (3) addresses class imbalance and data scarcity through LLM-driven synthetic data augmentation, producing a final corpus of 6,634 papers, 24,657 real reviews, and 46,438 synthetic reviews; and (4) fine-tunes both encoder-based models and open-source LLMs. We perform a comprehensive feature analysis of the structure and quality of the review text. Compared with sufficient reviews, deficient reviews exhibit lower rating scores, higher self-reported confidence, reduced structural complexity, and a higher proportion of negative sentiment. AI-generated text detection reveals that AI-generated reviews have increased dramatically since the emergence of ChatGPT. On the binary deficient-review detection task, mixed training with synthetic and real review data substantially improves recall and F1 scores. This study presents the first LLM-driven system for detecting deficient peer reviews, providing evidence to inform AI governance in peer review and offering insights into human-AI collaboration for maintaining academic integrity.
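The abstract's feature analysis contrasts deficient and sufficient reviews on structural complexity and sentiment. A minimal sketch of what such features might look like is below; the word lexicon and the specific features are invented placeholders, not the paper's actual feature set.

```python
# Toy structural and sentiment features for a review text.
# NEGATIVE_WORDS is a made-up mini-lexicon for illustration only.
NEGATIVE_WORDS = {"weak", "unclear", "trivial", "poor", "reject", "flawed"}

def review_features(text: str) -> dict:
    """Return crude structure and sentiment signals for one review."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    words = text.lower().split()
    negative = sum(1 for w in words if w.strip(".,;:!?") in NEGATIVE_WORDS)
    return {
        "n_paragraphs": len(paragraphs),  # proxy for structural complexity
        "n_words": len(words),
        "neg_ratio": negative / len(words) if words else 0.0,
    }

review = "The method is unclear and the results are weak.\n\nI lean reject."
print(review_features(review))  # {'n_paragraphs': 2, 'n_words': 12, 'neg_ratio': 0.25}
```

In a real pipeline, features of this kind would be computed per review and compared across the deficient and sufficient classes, alongside the model-based signals.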