SciNLP: A Domain-Specific Benchmark for Full-Text Scientific Entity and Relation Extraction in NLP

📅 2025-09-09
🤖 AI Summary
Existing scientific literature datasets are typically limited to localized text segments, hindering document-level structured knowledge extraction. To address this, we introduce SciNLP, the first benchmark for entity and relation extraction from complete scientific papers in the NLP domain, comprising 60 full-length papers, 7,072 annotated entities, and 1,826 relations, including cross-paragraph ones. It is the first fine-grained, human-curated annotation of full-text NLP literature, enabling long-text semantic understanding and high-density knowledge graph construction. Trained on SciNLP, supervised models achieve significant improvements over baselines in both entity recognition and relation extraction. The resulting knowledge graph has an average node degree of 3.2, and its rich semantic topology supports trend analysis and diverse downstream applications.

📝 Abstract
Structured information extraction from scientific literature is crucial for capturing core concepts and emerging trends in specialized fields. While existing datasets aid model development, most focus on specific publication sections due to domain complexity and the high cost of annotating scientific texts. To address this limitation, we introduce SciNLP, a specialized benchmark for full-text entity and relation extraction in the Natural Language Processing (NLP) domain. The dataset comprises 60 manually annotated full-text NLP publications, covering 7,072 entities and 1,826 relations. Compared to existing research, SciNLP is the first dataset providing full-text annotations of entities and their relationships in the NLP domain. To validate the effectiveness of SciNLP, we conducted comparative experiments with similar datasets and evaluated the performance of state-of-the-art supervised models on this dataset. Results reveal varying extraction capabilities of existing models across academic texts of different lengths. Cross-comparisons with existing datasets show that SciNLP achieves significant performance improvements on certain baseline models. Using models trained on SciNLP, we implemented automatic construction of a fine-grained knowledge graph for the NLP domain. Our KG has an average node degree of 3.2 per entity, indicating rich semantic topological information that enhances downstream applications. The dataset is publicly available at https://github.com/AKADDC/SciNLP.
Problem

Research questions and friction points this paper is trying to address.

Addressing the lack of full-text scientific entity extraction datasets
Providing manually annotated NLP publications for relation extraction
Enabling knowledge graph construction from academic texts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Full-text entity and relation extraction benchmark
Manually annotated full-text NLP publications
Automatic construction of knowledge graph
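The paper's KG-construction code is not reproduced here, but the headline statistic (an average node degree of 3.2) is straightforward to compute from extracted (head, relation, tail) triples. The sketch below is an illustration only; the triples and relation labels are invented placeholders, not SciNLP data:

```python
from collections import defaultdict


def average_node_degree(triples):
    """Average node degree of the undirected graph induced by
    (head, relation, tail) triples.

    Average degree = 2 * |edges| / |nodes|; parallel relations between
    the same entity pair are counted as a single edge.
    """
    nodes = set()
    edges = set()
    for head, _relation, tail in triples:
        nodes.update((head, tail))
        edges.add(frozenset((head, tail)))
    return 2 * len(edges) / len(nodes) if nodes else 0.0


# Hypothetical extraction output (placeholder entities and labels):
triples = [
    ("BERT", "used-for", "named entity recognition"),
    ("BERT", "compare", "RoBERTa"),
    ("RoBERTa", "used-for", "named entity recognition"),
]
print(average_node_degree(triples))  # 3 nodes, 3 edges -> 2.0
```

A higher average degree, such as the 3.2 reported for SciNLP's graph, indicates denser connectivity between extracted entities and hence richer topological context for downstream tasks.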