From Isolated Scoring to Collaborative Ranking: A Comparison-Native Framework for LLM-Based Paper Evaluation

📅 2026-03-18
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of current LLM-based peer review, which relies on isolated absolute scoring and struggles to generalize across conferences, time periods, and evolving review criteria, often learning narrow scoring heuristics instead of robust judgment. To overcome this, the authors propose the Comparison-Native Paper Evaluation (CNPE) framework, presented as the first approach to integrate comparative learning throughout data construction, training, and inference. CNPE employs graph-based sampling to select highly discriminative paper pairs and combines supervised fine-tuning with comparison-based reward modeling in reinforcement learning, thereby shifting the paradigm from absolute scoring to global relative ranking. Evaluated across multiple datasets, CNPE achieves an average relative performance gain of 21.8% over the DeepReview-14B baseline and demonstrates strong generalization on five unseen datasets.

๐Ÿ“ Abstract
Large language models (LLMs) are currently applied to scientific paper evaluation by assigning an absolute score to each paper independently. However, since score scales vary across conferences, time periods, and evaluation criteria, models trained on absolute scores are prone to fitting narrow, context-specific rules rather than developing robust scholarly judgment. To overcome this limitation, we propose shifting paper evaluation from isolated scoring to collaborative ranking. In particular, we design the **C**omparison-**N**ative framework for **P**aper **E**valuation (**CNPE**), integrating comparison into both data construction and model learning. We first propose a graph-based similarity ranking algorithm to facilitate the sampling of more informative and discriminative paper pairs from a collection. We then enhance relative quality judgment through supervised fine-tuning and reinforcement learning with comparison-based rewards. At inference, the model performs pairwise comparisons over sampled paper pairs and aggregates these preference signals into a global relative quality ranking. Experimental results demonstrate that our framework achieves an average relative improvement of **21.8%** over the strong baseline DeepReview-14B, while exhibiting robust generalization to five previously unseen datasets. [Code](https://github.com/ECNU-Text-Computing/ComparisonReview).
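The inference step described in the abstract, aggregating pairwise comparison outcomes into a global relative ranking, can be sketched minimally as follows. The `compare` judge and the simple win-count aggregation here are illustrative assumptions, not the paper's exact procedure: CNPE compares *sampled* pairs using the trained LLM, whereas this toy example exhaustively compares every pair with a stand-in judge.

```python
from itertools import combinations

def rank_by_pairwise_wins(papers, compare):
    """Aggregate pairwise preference signals into a global ranking.

    `papers` is a list of paper ids; `compare(a, b)` returns the id of
    the preferred paper. Win-count aggregation is a deliberately simple
    stand-in for the paper's aggregation step.
    """
    wins = {p: 0 for p in papers}
    for a, b in combinations(papers, 2):
        wins[compare(a, b)] += 1
    # More pairwise wins -> higher position in the global ranking.
    return sorted(papers, key=lambda p: wins[p], reverse=True)

# Toy judge: pretend quality is encoded in the id (lexicographic max wins).
demo = ["paper_2", "paper_9", "paper_5"]
ranking = rank_by_pairwise_wins(demo, lambda a, b: max(a, b))
```

With an LLM in place of the toy judge, `compare(a, b)` would prompt the model to state which of the two papers is stronger, and only a sampled subset of pairs would be scored.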
Problem

Research questions and friction points this paper is trying to address.

LLM-based paper evaluation
absolute scoring
score scale inconsistency
scholarly judgment
scientific paper evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

comparison-native framework
collaborative ranking
pairwise comparison
relative quality judgment
graph-based sampling
Pujun Zheng
School of Economics and Management, East China Normal University
Jiacheng Yao
Southeast University
Wireless communication, Distributed learning in wireless networks
Jinquan Zheng
School of Economics and Management, East China Normal University
Chenyang Gu
Undergraduate, Peking University
Embodied AI, Robotic Manipulation
Guoxiu He
School of Economics and Management, East China Normal University
Jiawei Liu
Wuhan University
Information Retrieval, Content Security, Document Intelligence
Yong Huang
School of Information Management, Wuhan University
Tianrui Guo
China Academic Degrees & Graduate Education Development Center
Wei Lu
Professor of Information Management, Wuhan University
Information Retrieval, Graph theory, Social networks, Web metrics