🤖 AI Summary
This study addresses the growing shortage of peer reviewers amid the rapid expansion of artificial intelligence research, which threatens the quality and sustainability of scholarly peer review. Departing from prevailing approaches that rely on fully automated review generation, this work pioneers a human-centered framework that positions large language models (LLMs) as tools for cultivating reviewer expertise. The proposed framework comprises two complementary components: an interactive guidance system grounded in high-quality reviewing principles to support long-term skill development, and an immediate feedback mechanism designed to enhance the quality of individual reviews. By prioritizing reviewers’ professional growth and the long-term health of the academic ecosystem, this approach offers a viable pathway toward human–AI collaboration in building a more robust and sustainable peer review system.
📝 Abstract
The rapid expansion of AI research has intensified the Reviewer Gap, threatening the sustainability of peer review and perpetuating a cycle of low-quality evaluations. This position paper critiques existing LLM approaches that automatically generate reviews and argues for a paradigm shift that positions LLMs as tools for assisting and educating human reviewers. We define the core principles of high-quality peer review and propose two complementary systems grounded in these foundations: (i) an LLM-assisted mentoring system that cultivates reviewers' long-term competencies, and (ii) an LLM-assisted feedback system that helps reviewers refine the quality of their reviews. This human-centered approach aims to strengthen reviewer expertise and contribute to building a more sustainable scholarly ecosystem.