OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment

📅 2026-01-04
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of efficiently and objectively evaluating the novelty of scholarly submissions in peer review, particularly amidst the rapidly expanding volume of scientific literature. To this end, the authors propose an agentic system powered by large language models that implements a four-stage pipeline—contribution extraction, semantic retrieval, hierarchical classification coupled with fine-grained full-text comparison, and evidence synthesis—to deliver an end-to-end, traceable, evidence-based novelty assessment grounded in actual published works. Grounding judgments in retrieved papers mitigates the hallucination risks inherent in large language models. Deployed on over 500 ICLR 2026 submissions, the system identifies relevant prior work that authors may have overlooked, supporting fairer, more consistent, and more interpretable peer review. All evaluation reports have been publicly released.

📝 Abstract
Evaluating novelty is critical yet challenging in peer review, as reviewers must assess submissions against a vast, rapidly evolving literature. This report presents OpenNovelty, an LLM-powered agentic system for transparent, evidence-based novelty analysis. The system operates through four phases: (1) extracting the core task and contribution claims to generate retrieval queries; (2) retrieving relevant prior work based on the extracted queries via a semantic search engine; (3) constructing a hierarchical taxonomy of core-task-related work and performing contribution-level full-text comparisons against each contribution; and (4) synthesizing all analyses into a structured novelty report with explicit citations and evidence snippets. Unlike naive LLM-based approaches, OpenNovelty grounds all assessments in retrieved real papers, ensuring verifiable judgments. We deploy our system on 500+ ICLR 2026 submissions with all reports publicly available on our website, and preliminary analysis suggests it can identify relevant prior work, including closely related papers that authors may overlook. OpenNovelty aims to empower the research community with a scalable tool that promotes fair, consistent, and evidence-backed peer review.
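The four-phase pipeline described in the abstract can be sketched as a simple orchestration loop. This is a hypothetical illustration only: every function, class, and the tiny in-memory paper "index" below are invented for the sketch (the real system prompts LLMs and queries a semantic search engine), but the control flow mirrors the four phases the abstract names.

```python
from dataclasses import dataclass

# Hypothetical sketch of the four-phase pipeline; all names are invented.

@dataclass
class Submission:
    title: str
    full_text: str

@dataclass
class NoveltyReport:
    task: str
    comparisons: list   # contribution-level comparisons (phase 3)
    evidence: list      # cited prior work grounding the report (phase 4)

def extract_claims(sub: Submission):
    # Phase 1: extract the core task and contribution claims, then turn
    # them into retrieval queries. A real system would prompt an LLM;
    # here we fake claim extraction with a keyword heuristic.
    task = sub.title
    claims = [s.strip() for s in sub.full_text.split(".")
              if "we propose" in s.lower()]
    queries = [task] + claims
    return task, claims, queries

def retrieve_prior_work(queries):
    # Phase 2: semantic retrieval of prior work. Stubbed with a tiny
    # keyword->paper map standing in for a semantic search engine.
    corpus = {
        "novelty assessment": "AutoReview: automated novelty scoring",
        "agentic system": "AgentReviewer: LLM agents for peer review",
    }
    return [paper for key, paper in corpus.items()
            if any(key in q.lower() for q in queries)]

def compare_contributions(claims, papers):
    # Phase 3: taxonomy construction plus contribution-level full-text
    # comparison, reduced here to pairing each claim with retrieved work.
    return [{"claim": c, "overlaps": papers} for c in claims]

def synthesize_report(task, comparisons, papers):
    # Phase 4: synthesize a structured report; the retrieved papers are
    # kept as explicit evidence so every judgment stays verifiable.
    return NoveltyReport(task=task, comparisons=comparisons, evidence=papers)

def run_pipeline(sub: Submission) -> NoveltyReport:
    task, claims, queries = extract_claims(sub)
    papers = retrieve_prior_work(queries)
    comparisons = compare_contributions(claims, papers)
    return synthesize_report(task, comparisons, papers)
```

The key design point the abstract stresses is visible in phase 4: the report carries the retrieved papers as evidence, so claims are never assessed by the LLM in isolation.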
Problem

Research questions and friction points this paper is trying to address.

novelty assessment
peer review
scholarly literature
academic novelty
evidence-based evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-powered agentic system
verifiable novelty assessment
semantic search
evidence-based peer review
hierarchical taxonomy
Ming Zhang
School of Computer Science and Technology, Fudan University
LLM
Kexin Tan
Fudan University
Yueyuan Huang
Fudan University
Yujiong Shen
Fudan University
Chunchun Ma
WisPaper.AI
Li Ju
Department of Information Technology, Uppsala University
Federated Learning, Distributed Optimization, Uncertainty Quantification, Multimodal Language Models
Xinran Zhang
University of Science and Technology of China
SLAM, NeRF, 3DGS
Yuhui Wang
Fudan University
Wenqing Jing
Fudan University
Jingyi Deng
Fudan University
Huayu Sha
Fudan University
Binze Hu
Fudan University
Jingqi Tong
Fudan University
Changhao Jiang
Fudan University
Yage Geng
WisPaper.AI
Yuankai Ying
Fudan University, WisPaper.AI
Yue Zhang
WisPaper.AI
Zhangyue Yin
Fudan University
Zhiheng Xi
Fudan University
LLM Reasoning, LLM-based Agents
Shihan Dou
Fudan University
LLMs, Code LMs, RL, Alignment
Tao Gui
Fudan University
Qi Zhang
Fudan University
SAGIN, satellite routing
Xuanjing Huang
Fudan University