GraphMind: Interactive Novelty Assessment System for Accelerating Scientific Discovery

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Existing novelty assessment methods for scientific papers suffer from unreliability and opacity due to incomplete domain knowledge. Method: This paper proposes a traceable, LLM-driven interactive evaluation system that integrates academic APIs (e.g., arXiv, Semantic Scholar) to enable structured literature parsing, automated key-element annotation, multi-perspective related-work retrieval, and end-to-end analysis via relation extraction and interactive visualization. Crucially, it introduces a “traceable contextual insight” mechanism that explicitly grounds LLM-generated judgments in supporting evidence. Contribution/Results: The system is open-sourced with a graphical user interface; empirical evaluation demonstrates significant improvements in researchers’ efficiency in comprehending domain frontiers and in the credibility of novelty assessments.

📝 Abstract
Large Language Models (LLMs) show strong reasoning and text generation capabilities, prompting their use in scientific literature analysis, including novelty assessment. While evaluating the novelty of scientific papers is crucial for peer review, it requires extensive knowledge of related work, something not all reviewers have. While recent work on LLM-assisted scientific literature analysis supports literature comparison, existing approaches offer limited transparency and lack mechanisms for result traceability via an information retrieval module. To address this gap, we introduce **GraphMind**, an easy-to-use interactive web tool designed to assist users in evaluating the novelty of scientific papers or drafted ideas. Specifically, **GraphMind** enables users to capture the main structure of a scientific paper, annotate its key elements, explore related papers through various relationships and perspectives, and assess novelty with verifiable contextual insights. The tool integrates external APIs such as arXiv and Semantic Scholar with LLMs to support annotation, extraction, retrieval, and classification of papers. This combination provides users with a rich, structured view of a scientific idea's core contributions and its connections to existing work. **GraphMind** is available at https://oyarsa.github.io/graphmind, with a demonstration video at https://youtu.be/wKbjQpSvwJg. The source code is available at https://github.com/oyarsa/graphmind.
Problem

Research questions and friction points this paper is trying to address.

Assessing scientific paper novelty with limited reviewer knowledge
Providing transparent traceable results for literature analysis
Enabling interactive exploration of related work connections
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLMs with external APIs for paper analysis
Enables interactive novelty assessment with contextual insights
Provides structured exploration of scientific paper relationships
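The retrieval step behind these contributions can be illustrated with a minimal sketch. This is not GraphMind's actual code; it only shows the kind of call the system's pipeline makes against the real Semantic Scholar Graph API paper-search endpoint, with function names of our own choosing:

```python
# Hypothetical sketch of a related-work retrieval step like the one
# GraphMind performs: query the Semantic Scholar Graph API for papers
# matching a key element extracted from the target paper.
import json
import urllib.parse
import urllib.request

S2_SEARCH = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(query: str, fields=("title", "abstract", "year"), limit=5) -> str:
    """Build a paper-search URL for the Semantic Scholar Graph API."""
    params = urllib.parse.urlencode(
        {"query": query, "fields": ",".join(fields), "limit": limit}
    )
    return f"{S2_SEARCH}?{params}"

def search_related(query: str) -> list[dict]:
    """Fetch candidate related papers (requires network access)."""
    with urllib.request.urlopen(build_search_url(query)) as resp:
        return json.load(resp).get("data", [])
```

In a pipeline like GraphMind's, the returned titles and abstracts would then be passed to an LLM for relationship classification, with each judgment linked back to the retrieved record as its supporting evidence.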