🤖 AI Summary
This study addresses the challenge of ensuring review quality under constrained peer-review resources by automating the assessment of how useful reviews are to authors. It proposes a systematic four-aspect definition of review utility (Actionability, Grounding & Specificity, Verifiability, and Helpfulness) and constructs RevUtil, a large-scale benchmark comprising human-annotated and controllably synthesized review comments with aspect scores and rationales. Leveraging these fine-grained annotations, the authors fine-tune open-source language models that reach agreement with human judgments comparable to, and in some cases exceeding, that of powerful closed models such as GPT-4o across all four aspects. Empirical analysis further shows that current machine-generated reviews remain substantially less useful than human-written ones. Overall, this work establishes the first reproducible benchmark and methodological foundation for automated utility evaluation of peer reviews.
📝 Abstract
Providing constructive feedback to paper authors is a core component of peer review. As reviewers have increasingly less time to perform reviews, automated support systems are required to ensure high reviewing quality and thus keep the feedback in reviews useful for authors. To this end, we identify four key aspects of review comments (individual points in the weakness sections of reviews) that drive their utility for authors: Actionability, Grounding & Specificity, Verifiability, and Helpfulness. To enable the evaluation and development of models assessing review comments, we introduce the RevUtil dataset. We collect 1,430 human-labeled review comments and scale our data with 10k synthetically labeled comments for training purposes. The synthetic data additionally contains rationales, i.e., explanations for the aspect score of a review comment. Employing the RevUtil dataset, we benchmark fine-tuned models for assessing review comments on these aspects and generating rationales. Our experiments demonstrate that these fine-tuned models achieve agreement levels with humans comparable to, and in some cases exceeding, those of powerful closed models like GPT-4o. Our analysis further reveals that machine-generated reviews generally underperform human reviews on our four aspects.
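The abstract reports "agreement levels with humans" between model-assigned and human-assigned aspect scores. As an illustration only, the sketch below computes Cohen's kappa, one common chance-corrected agreement measure, over two hypothetical annotators' scores; whether RevUtil uses kappa specifically, and the 1–5 score range shown, are assumptions, not details from the paper.

```python
from collections import Counter


def cohens_kappa(a, b):
    """Cohen's kappa between two annotators' categorical labels.

    po = observed agreement rate; pe = agreement expected by chance
    given each annotator's label distribution.
    """
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)


# Hypothetical 1-5 aspect scores (e.g., Actionability) from a human
# annotator and a fine-tuned model on eight review comments.
human = [4, 3, 5, 2, 4, 3, 5, 1]
model = [4, 3, 4, 2, 4, 3, 5, 2]
print(round(cohens_kappa(human, model), 3))  # → 0.68
```

Identical label lists yield kappa = 1.0, and agreement no better than chance yields 0; values above roughly 0.6 are conventionally read as substantial agreement.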