AI Summary
This study addresses URL extraction from arXiv preprints for open science resource mining, systematically evaluating how multi-format representations (PDF, LaTeX, HTML, XML) affect extraction accuracy and completeness. We construct the first annotated, longitudinal arXiv dataset spanning 1992–2024, covering all major formats, with manually verified URLs. We propose a heuristic, multi-source fusion method that leverages structural cues across formats and evaluate performance uniformly using the F1-score. Results show that structured formats (HTML/XML) substantially improve precision (up to F1 = 0.71), while multi-format integration increases URL coverage. We further uncover, for the first time, a sharp rise in URL usage within arXiv starting in 2014, indicating growing reliance on external scholarly resources. All data, source code, and analysis pipelines are publicly released, establishing a reproducible benchmark for academic link analysis and open-science infrastructure research.
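The paper's actual heuristics are not reproduced here; the following is a minimal sketch of the general idea behind heuristic, multi-source URL extraction: regex matching over flat text, markup attributes for structured formats, and a set union for fusion. All function names and the URL pattern are illustrative assumptions, not the authors' implementation.

```python
import re

# Illustrative URL pattern; real heuristics handle line breaks,
# trailing punctuation, and format-specific quirks.
URL_RE = re.compile(r'https?://[^\s<>"\')\]}]+')

def extract_urls_text(text):
    """Heuristic extraction from flat text (e.g., pdftotext output)."""
    return set(URL_RE.findall(text))

def extract_urls_html(html):
    """Structured formats expose URLs explicitly, e.g. in href attributes."""
    return set(re.findall(r'href="(https?://[^"]+)"', html))

def fuse(per_format_urls):
    """Multi-source fusion: union the URL sets found in each format."""
    fused = set()
    for urls in per_format_urls:
        fused |= urls
    return fused
```

The union captures why multi-format integration increases coverage: a URL missed in one representation (e.g., broken across lines in a PDF) may still appear intact in another.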
Abstract
In this work, we study how URL extraction results depend on the input format. We compiled a pilot dataset by extracting URLs from 10 arXiv papers, applying the same heuristic method to four formats derived from the PDF files or the source LaTeX files. We found that accurate and complete URL extraction from any single format, or from a combination of formats, is challenging, with a best F1-score of 0.71. Using the pilot dataset, we evaluate extraction performance across formats and show that structured formats such as HTML and XML produce more accurate results than PDF or plain text. Combining multiple formats improves coverage, especially when targeting research-critical resources. We further apply URL extraction to two tasks: classifying URLs into open-access datasets and software versus other resources, and analyzing the trend of URL usage in arXiv papers from 1992 to 2024. These results suggest that combining multiple formats achieves better URL-extraction performance than any single format, and that the number of URLs in arXiv papers increased steadily from 1992 to 2014 and sharply from 2014 to 2024. The dataset and the Jupyter notebooks used for the preliminary analysis are publicly available at https://github.com/lamps-lab/arxiv-urls
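The F1-score used throughout compares an extractor's URL set against the manually verified gold URLs for each paper. A minimal sketch of that scoring, assuming both sides are plain sets of URL strings (the function name is hypothetical):

```python
def precision_recall_f1(extracted, gold):
    """Score an extracted URL set against manually verified gold URLs.

    Precision: fraction of extracted URLs that are correct.
    Recall: fraction of gold URLs that were found.
    F1: harmonic mean of the two.
    """
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)  # true positives: URLs in both sets
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Exact set matching is a simplifying assumption; in practice URLs usually need normalization (scheme, trailing slashes, percent-encoding) before comparison.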