🤖 AI Summary
Large-scale evaluation of author name disambiguation quality in biomedical literature remains challenging due to the absence of scalable, ground-truth–free assessment methods.
Method: We propose an unsupervised quality proxy—“abnormal authorship position upon first independent PI appointment” (e.g., appearing as last author in the inaugural year)—and analyze temporal career trajectories of 5.8 million researchers from OpenAlex.
Contribution/Results: Over 60% of researchers exhibit this anomaly, which correlates strongly with disambiguation errors. Statistical analysis reveals that missing institutional affiliations significantly increase anomaly rates, whereas ORCID presence substantially reduces them. Notably, pre-2010 female authors show systematically elevated anomaly rates, suggesting that gender-biased disambiguation may have compromised earlier demographic conclusions. This work establishes, for the first time, a robust association between early-career authorship patterns and disambiguation fidelity, introducing a scalable, annotation-free paradigm for evaluating large-scale author identification systems.
📝 Abstract
Authorship disambiguation is crucial for advancing the science of science. However, assessing the quality of authorship disambiguation in large-scale databases remains challenging, since manually curating a gold-standard dataset of disambiguated authors is difficult. By estimating when 5.8 million biomedical researchers became independent Principal Investigators (PIs), using authorship metadata extracted from OpenAlex -- the largest open-source bibliometric database -- we unexpectedly discovered an anomaly: over 60% of researchers appeared as last authors in their first career year. We hypothesized that this improbable finding results from poor name disambiguation, suggesting that such an anomaly may serve as an indicator of low-quality authorship disambiguation. Indeed, authors lacking affiliation information, which makes disambiguation more difficult, were far more likely to exhibit this anomaly than authors who included affiliation information. In contrast, authors with an Open Researcher and Contributor ID (ORCID) -- expected to be disambiguated with higher quality -- showed significantly lower anomaly rates. We further applied this approach to examine authorship disambiguation quality by gender over time, and found that disambiguation quality for female authors was lower than for male authors before 2010, suggesting that gender disparity findings based on pre-2010 data may require careful reexamination. Our results provide a framework for systematically evaluating authorship disambiguation quality in various contexts, facilitating future improvements in authorship disambiguation efforts.
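The core quality proxy can be sketched in a few lines: group publication records by author ID, find each author's first career year, and flag authors who appear as last author in that year. This is a minimal illustration assuming a simplified record format of `(author_id, year, position)` tuples (e.g., derived from OpenAlex `authorships` metadata); it is not the authors' actual pipeline.

```python
from collections import defaultdict

def anomaly_rate(records):
    """Fraction of authors who appear as a last author in their first career year.

    records: iterable of (author_id, year, position) tuples, where
    position is one of "first", "middle", "last".
    """
    by_author = defaultdict(list)
    for author_id, year, position in records:
        by_author[author_id].append((year, position))

    anomalous = 0
    for papers in by_author.values():
        # First career year = earliest publication year under this author ID.
        first_year = min(year for year, _ in papers)
        # Anomaly: any last-author paper in that inaugural year.
        if any(pos == "last" for year, pos in papers if year == first_year):
            anomalous += 1
    return anomalous / len(by_author)

# Example: one anomalous author (last author in their first year), one not.
records = [
    ("a1", 2000, "last"), ("a1", 2001, "first"),
    ("a2", 1999, "first"), ("a2", 2000, "last"),
]
print(anomaly_rate(records))  # → 0.5
```

Under this framing, a split author identity (one real career fragmented across several database IDs) tends to produce spurious "first years" containing senior, last-author papers, which is why an elevated rate can flag poor disambiguation.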