🤖 AI Summary
To address the challenges of fine-grained concept and relation extraction from machine learning (ML) scholarly literature, and the weak support for reproducibility that results, this paper introduces GSAP-ERE, a high-quality academic information extraction dataset designed specifically for ML. It covers the full text of 100 peer-reviewed papers and contains 63K manually annotated entity mentions (10 types) and 35K semantic relations (18 types). Trained on this human-annotated corpus, fine-tuned models achieve 80.6% and 54.0% F1 on named entity recognition (NER) and relation extraction (RE), respectively, substantially outperforming state-of-the-art large language model prompting approaches (44.4% and 10.1%). GSAP-ERE fills a gap in benchmarking fine-grained academic knowledge extraction for ML, serving as foundational infrastructure and an evaluation standard for constructing ML knowledge graphs, monitoring research reproducibility, and developing domain-specific information extraction models.
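The NER and RE numbers above are F1 scores, i.e. the harmonic mean of precision and recall over predicted entity spans or relation triples. A minimal sketch of that computation (the counts below are illustrative, not taken from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from raw counts of true positives, false positives, and
    false negatives, as typically reported for NER/RE benchmarks."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy counts: 80 correctly extracted spans, 20 spurious, 20 missed
print(round(f1_score(80, 20, 20), 3))  # → 0.8
```

Note that strict span-level F1 counts a prediction as correct only if both boundaries and the type match, which is part of why RE scores (requiring two correct spans plus a correct relation label) run well below NER scores.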
📝 Abstract
Research in Machine Learning (ML) and AI evolves rapidly. Information Extraction (IE) from scientific publications makes it possible to identify information about research concepts and resources at large scale, and is therefore a pathway to improving the understanding and reproducibility of ML-related research. To extract and connect fine-grained information in ML-related research, e.g., method training and data usage, we introduce GSAP-ERE, a manually curated fine-grained dataset with 10 entity types and 18 semantically categorized relation types, containing mentions of 63K entities and 35K relations from the full text of 100 ML publications. We show that our dataset enables fine-tuned models to automatically extract information relevant to downstream tasks ranging from knowledge graph (KG) construction to monitoring the computational reproducibility of AI research at scale. Additionally, we use our dataset as a test suite to explore prompting strategies for IE with Large Language Models (LLMs). We observe that state-of-the-art LLM prompting methods are largely outperformed by our best fine-tuned baseline model (NER: 80.6%, RE: 54.0% for the fine-tuned model vs. NER: 44.4%, RE: 10.1% for the LLM). This performance gap between supervised models and unsupervised use of LLMs suggests that datasets like GSAP-ERE are needed to advance research in scholarly information extraction.
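The entity mentions and typed relations described in the abstract are span-level annotations that flatten naturally into KG triples. A minimal sketch of that data structure, using hypothetical type names (`Method`, `Dataset`, `trained_on`) as stand-ins for the dataset's actual 10 entity and 18 relation types, which are not named here:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    start: int   # character offset of the mention (inclusive)
    end: int     # character offset (exclusive)
    label: str   # entity type (placeholder names below)

@dataclass(frozen=True)
class Relation:
    head: Entity
    tail: Entity
    label: str   # relation type (placeholder name below)

def to_kg_triples(relations):
    """Flatten annotated relations into (head, relation, tail) triples
    suitable as input for knowledge graph construction."""
    return [
        ((r.head.start, r.head.end, r.head.label),
         r.label,
         (r.tail.start, r.tail.end, r.tail.label))
        for r in relations
    ]

# Toy sentence: "BERT is trained on Wikipedia"
method = Entity(0, 4, "Method")     # "BERT"
data = Entity(19, 28, "Dataset")    # "Wikipedia"
rel = Relation(method, data, "trained_on")
print(to_kg_triples([rel]))
# → [((0, 4, 'Method'), 'trained_on', (19, 28, 'Dataset'))]
```

A fine-tuned NER model would predict the `Entity` spans and an RE model the `Relation` links; the triples then accumulate across a corpus into a KG.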