GSAP-ERE: Fine-Grained Scholarly Entity and Relation Extraction Focused on Machine Learning

📅 2025-11-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of fine-grained concept and relation extraction from machine learning (ML) scholarly literature, as well as the weak support for reproducibility monitoring, this paper introduces GSAP-ERE, the first high-quality academic information extraction dataset specifically designed for ML. It covers the full text of 100 peer-reviewed papers and contains 63,000 manually annotated entity mentions (10 types) and 35,000 semantic relations (18 types). Leveraging this human-annotated corpus, fine-tuned models achieve 80.6% and 54.0% F1 on named entity recognition (NER) and relation extraction (RE), respectively, substantially outperforming state-of-the-art large language model prompting approaches (44.4% and 10.1%). GSAP-ERE fills a critical gap in benchmarking fine-grained academic knowledge extraction for ML, serving as foundational infrastructure and an evaluation standard for constructing ML knowledge graphs, monitoring research reproducibility, and developing domain-specific information extraction models.

📝 Abstract
Research in Machine Learning (ML) and AI evolves rapidly. Information Extraction (IE) from scientific publications enables the identification of information about research concepts and resources on a large scale and therefore is a pathway to improve understanding and reproducibility of ML-related research. To extract and connect fine-grained information in ML-related research, e.g., method training and data usage, we introduce GSAP-ERE. It is a manually curated fine-grained dataset with 10 entity types and 18 semantically categorized relation types, containing mentions of 63K entities and 35K relations from the full text of 100 ML publications. We show that our dataset enables fine-tuned models to automatically extract information relevant for downstream tasks ranging from knowledge graph (KG) construction to monitoring the computational reproducibility of AI research at scale. Additionally, we use our dataset as a test suite to explore prompting strategies for IE using Large Language Models (LLMs). We observe that the performance of state-of-the-art LLM prompting methods is largely outperformed by our best fine-tuned baseline model (NER: 80.6%, RE: 54.0% for the fine-tuned model vs. NER: 44.4%, RE: 10.1% for the LLM). This disparity in performance between supervised models and unsupervised usage of LLMs suggests that datasets like GSAP-ERE are needed to advance research in the domain of scholarly information extraction.
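The reported NER and RE scores are F1 values over predicted versus gold annotations. As a minimal sketch of how such figures are conventionally computed (the paper's exact scoring protocol may differ), the entity types below are illustrative and the strict exact-span, exact-type matching criterion is an assumption:

```python
# Sketch: entity-level micro-F1 for NER evaluation (strict match).
# Assumption: a prediction counts as correct only if span AND type
# match a gold annotation exactly; GSAP-ERE's protocol may vary.

def micro_f1(gold, pred):
    """gold/pred: lists of (start, end, entity_type) tuples."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)        # exact matches
    fp = len(pred_set - gold_set)        # spurious predictions
    fn = len(gold_set - pred_set)        # missed gold entities
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative spans; the second prediction has the wrong type.
gold = [(0, 4, "Method"), (10, 17, "Dataset"), (20, 25, "Metric")]
pred = [(0, 4, "Method"), (10, 17, "Model"), (20, 25, "Metric")]
print(round(micro_f1(gold, pred), 3))  # 2 TP, 1 FP, 1 FN -> 0.667
```

The same strict-match logic applies to RE evaluation, with tuples extended to (head span, tail span, relation type), which is one reason RE scores sit well below NER scores.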
Problem

Research questions and friction points this paper is trying to address.

Extracting fine-grained entities and relations from ML publications
Building dataset for knowledge graph construction and reproducibility monitoring
Evaluating performance gaps between fine-tuned models and LLM methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces GSAP-ERE dataset for fine-grained entity extraction
Enables fine-tuned models for knowledge graph construction tasks
Tests LLM prompting strategies against supervised baseline performance
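The LLM baseline relies on prompting rather than fine-tuning. A minimal sketch of what a zero-shot NER prompt for scholarly IE might look like follows; the template, the entity-type subset, and the JSON output convention are hypothetical and not taken from the paper:

```python
# Sketch: building a zero-shot NER prompt for scholarly IE.
# The template and entity types below are illustrative assumptions,
# not the prompting strategy evaluated in the paper.

ENTITY_TYPES = ["Method", "Dataset", "Metric", "Task"]  # illustrative subset

def build_ner_prompt(passage):
    types = ", ".join(ENTITY_TYPES)
    return (
        "Extract all entity mentions from the passage below.\n"
        f"Allowed entity types: {types}.\n"
        'Answer as a JSON list of objects with keys "mention" and '
        '"type". Return [] if no entities are found.\n\n'
        f"Passage: {passage}"
    )

prompt = build_ner_prompt("We fine-tune BERT on SQuAD and report F1.")
print(prompt)
```

Even with careful prompt design, the paper's numbers (NER 44.4% vs. 80.6% F1) indicate that such unsupervised prompting lags well behind models fine-tuned on in-domain annotations.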
Wolfgang Otto
GESIS – Leibniz Institute for the Social Sciences, Cologne, Germany
Lu Gan
GESIS – Leibniz Institute for the Social Sciences, Cologne, Germany
Sharmila Upadhyaya
GESIS – Leibniz Institute for the Social Sciences
Saurav Karmakar
Individual Researcher
Stefan Dietze
Full Professor (Heinrich-Heine-University Düsseldorf) & Scientific Director (KTS, GESIS)
Knowledge Graphs, Information Retrieval, Web Science, NLP