REM-CTX: Automated Peer Review via Reinforcement Learning with Auxiliary Context

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a common omission in automated peer review systems: visual content such as figures and tables, along with external scholarly signals, is left out of the review process. The authors propose a reinforcement learning–based approach that, for the first time, explicitly models the correspondence between textual reviews, visual elements, and external academic signals. They introduce two complementary correspondence-based reward mechanisms and train an 8-billion-parameter language model with multi-dimensional quality metrics using Group Relative Policy Optimization (GRPO). Experiments across computer science, biology, and the physical sciences demonstrate that the proposed method significantly outperforms six baseline systems, including larger commercial models, in both review quality and contextual consistency. The study also uncovers a negative correlation between critique-related dimensions and other quality metrics during training.
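The summary does not give the reward formulation, but GRPO's group-relative advantage is standard. A sketch, assuming the combined reward is a weighted sum of the quality reward and the two correspondence rewards (the weights $\lambda_1, \lambda_2$ are hypothetical):

$$
r_i = r_i^{\text{qual}} + \lambda_1\, r_i^{\text{corr},1} + \lambda_2\, r_i^{\text{corr},2},
\qquad
\hat{A}_i = \frac{r_i - \operatorname{mean}\!\big(\{r_j\}_{j=1}^{G}\big)}{\operatorname{std}\!\big(\{r_j\}_{j=1}^{G}\big)}
$$

where $G$ is the number of reviews sampled per manuscript; GRPO normalizes each sampled review's reward within its group rather than learning a separate value function.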
📝 Abstract
Most automated peer review systems rely on textual manuscript content alone, leaving visual elements such as figures and external scholarly signals underutilized. We introduce REM-CTX, a reinforcement-learning system that incorporates auxiliary context into the review generation process via correspondence-aware reward functions. REM-CTX trains an 8B-parameter language model with Group Relative Policy Optimization (GRPO) and combines a multi-aspect quality reward with two correspondence rewards that explicitly encourage alignment with auxiliary context. Experiments on manuscripts across Computer, Biological, and Physical Sciences show that REM-CTX achieves the highest overall review quality against six baselines, outperforming systems built on substantially larger commercial models and surpassing the next-best RL baseline on both quality and contextual grounding metrics. Ablation studies confirm that the two correspondence rewards are complementary: each selectively improves its targeted correspondence metric while preserving all quality dimensions, and the full model outperforms all partial variants. Analysis of training dynamics reveals that the criticism aspect is negatively correlated with other metrics during training, suggesting that future studies should group multi-dimensional rewards for review generation.
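As a concrete illustration of the setup the abstract describes, here is a minimal Python sketch of the combined reward and GRPO's group-relative advantages. The aspect names, weights, and function names are assumptions for illustration, not the paper's implementation.

```python
import statistics

def combined_reward(quality_scores, corr_visual, corr_external,
                    w_visual=0.5, w_external=0.5):
    """Combine a multi-aspect quality reward with two correspondence rewards.

    quality_scores: per-aspect scores, e.g. {"criticism": 0.7, "clarity": 0.8}
    corr_visual / corr_external: scalar correspondence rewards in [0, 1]
    Aspect names and weights are illustrative, not from the paper.
    """
    quality = sum(quality_scores.values()) / len(quality_scores)
    return quality + w_visual * corr_visual + w_external * corr_external

def grpo_advantages(rewards):
    """Group-relative advantages: z-score each sampled review's reward
    against its group, as in GRPO (no learned value function)."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mu) / sigma for r in rewards]

# Example: one manuscript, a group of 4 sampled reviews.
rewards = [
    combined_reward({"criticism": 0.7, "clarity": 0.8}, 0.6, 0.5),
    combined_reward({"criticism": 0.9, "clarity": 0.6}, 0.4, 0.7),
    combined_reward({"criticism": 0.5, "clarity": 0.9}, 0.8, 0.6),
    combined_reward({"criticism": 0.6, "clarity": 0.7}, 0.5, 0.5),
]
print(grpo_advantages(rewards))
```

Because GRPO z-scores rewards within each group, any fixed affine rescaling of the combined reward leaves the advantages unchanged; only the relative weighting of the quality and correspondence terms matters.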
Problem

Research questions and friction points this paper is trying to address.

automated peer review
auxiliary context
reinforcement learning
contextual grounding
review generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

reinforcement learning
automated peer review
auxiliary context
correspondence reward
GRPO