🤖 AI Summary
This paper reviews the use of generative artificial intelligence (GenAI) in scientometrics and opens a debate on its broader implications for the field: Can GenAI mimic human reasoning? Does its large-scale generation of scientific text reshape the textual characteristics on which bibliometric measurement rests, such as authorship, lexical distributions, and citation structures? Grounding the discussion in GenAI's probabilistic and generative nature, rooted in distributional linguistics, the paper critically reviews recent experiments in topic labelling, citation-context analysis, predictive applications, scholar profiling, and research assessment. Results indicate strong performance on tasks dominated by language generation, but limitations where stable semantics, pragmatic reasoning, or structured domain knowledge are required; because such findings may quickly become outdated, the paper recommends systematically comparing the performance of different GenAI models on each specific task. Finally, it warns that large volumes of AI-generated scientific text may alter the textual features on which scientometric indicators depend, and calls for careful empirical work and theoretical reflection to keep the evolving patterns of knowledge production interpretable.
📝 Abstract
The aim of this paper is to review the use of GenAI in scientometrics and to begin a debate on the broader implications for the field. First, we introduce GenAI's generative and probabilistic nature, which is rooted in distributional linguistics, and relate it to the debate on the extent to which GenAI might be able to mimic human 'reasoning'. Second, we leverage this distinction for a critical engagement with recent experiments using GenAI in scientometrics, including topic labelling, the analysis of citation contexts, predictive applications, the profiling of scholars, and research assessment. GenAI shows promise in tasks where language generation dominates, such as labelling, but faces limitations in tasks that require stable semantics, pragmatic reasoning, or structured domain knowledge. These results may, however, quickly become outdated; our recommendation is therefore to systematically compare the performance of different GenAI models on specific tasks. Third, we ask whether, by generating large amounts of scientific language, GenAI might have a fundamental impact on our field by affecting the textual characteristics used to measure science, such as authors, words, and references. We argue that careful empirical work and theoretical reflection will be essential to remain capable of interpreting the evolving patterns of knowledge production.
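As an illustration of the cross-model comparison the abstract recommends, the sketch below scores several GenAI models on a single scientometric task (topic labelling) against a small gold standard. It is a minimal, hypothetical harness, not the authors' method: the model names, the `label_topic` callables, the example gold-standard items, and the exact-match metric are assumptions for illustration; real runs would wrap actual GenAI APIs behind the same interface and would be repeated after each model update to track how quickly results drift.

```python
# Minimal sketch of cross-model benchmarking for one scientometric task
# (topic labelling). Model names and the label_topic callables are
# hypothetical placeholders; in practice they would wrap real GenAI APIs.
from typing import Callable, Dict, List, Tuple

# Illustrative gold standard: (term cluster, expected topic label) pairs.
GOLD: List[Tuple[str, str]] = [
    ("h-index, citation counts, impact factor", "research evaluation"),
    ("co-authorship, collaboration networks", "scientific collaboration"),
    ("word embeddings, topic models, abstracts", "text mining of science"),
]

def exact_match(predicted: str, expected: str) -> float:
    """Crude accuracy criterion: normalized string equality."""
    return float(predicted.strip().lower() == expected.strip().lower())

def benchmark(models: Dict[str, Callable[[str], str]]) -> Dict[str, float]:
    """Run every model on every gold item and return mean accuracy per model."""
    scores: Dict[str, float] = {}
    for name, label_topic in models.items():
        hits = [exact_match(label_topic(terms), expected) for terms, expected in GOLD]
        scores[name] = sum(hits) / len(hits)
    return scores

if __name__ == "__main__":
    # Stand-in models: fixed answers here, different GenAI systems (and
    # successive versions of the same system) in a real comparison.
    models = {
        "model_a": lambda terms: "research evaluation",
        "model_b": lambda terms: "bibliometrics",
    }
    for name, acc in sorted(benchmark(models).items(), key=lambda kv: -kv[1]):
        print(f"{name}: accuracy={acc:.2f}")
```

Keeping the gold standard and metric fixed while swapping models in and out is what makes the comparison systematic; re-running the same harness over time also addresses the concern that results for any single model become outdated.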