Generative AI and the future of scientometrics: current topics and future questions

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the implications of generative artificial intelligence (GenAI) for scientometrics: Can GenAI emulate scientific reasoning? Does its large-scale generation of scientific text reshape core bibliometric features, such as authorship attribution, lexical distributions, and citation structures? Framing GenAI's generative and probabilistic nature in terms of distributional linguistics, the authors critically review recent experiments applying GenAI to topic labelling, citation analysis, scholar profiling, and research assessment. Results indicate strong performance on language-generation tasks but weak performance on tasks requiring stable semantics, pragmatic reasoning, or structured domain knowledge; moreover, frequent model updates compromise the reproducibility of results. The paper further argues that large-scale AI-generated text may perturb the textual characteristics on which scientometric indicators rest, such as authors, words, and references. The authors therefore advocate systematic, cross-model and longitudinal benchmarking to safeguard methodological integrity and metric reliability in the GenAI era.

📝 Abstract
The aim of this paper is to review the use of GenAI in scientometrics and to begin a debate on the broader implications for the field. First, we provide an introduction to GenAI's generative and probabilistic nature as rooted in distributional linguistics, and we relate this to the debate on the extent to which GenAI might be able to mimic human 'reasoning'. Second, we leverage this distinction for a critical engagement with recent experiments using GenAI in scientometrics, including topic labelling, the analysis of citation contexts, predictive applications, scholars' profiling, and research assessment. GenAI shows promise in tasks where language generation dominates, such as labelling, but faces limitations in tasks that require stable semantics, pragmatic reasoning, or structured domain knowledge. However, these results might become quickly outdated. Our recommendation is, therefore, to always strive to systematically compare the performance of different GenAI models for specific tasks. Third, we inquire whether, by generating large amounts of scientific language, GenAI might have a fundamental impact on our field by affecting textual characteristics used to measure science, such as authors, words, and references. We argue that careful empirical work and theoretical reflection will be essential to remain capable of interpreting the evolving patterns of knowledge production.
Problem

Research questions and friction points this paper is trying to address.

Reviewing GenAI's use in scientometrics and its implications
Assessing GenAI's potential in language tasks versus reasoning tasks
Examining GenAI's impact on textual characteristics in science measurement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grounds GenAI's capabilities and limits in its distributional-linguistics foundations
Shows GenAI excels at language-generation tasks such as topic labelling
Recommends systematic cross-model comparison of GenAI performance on specific tasks
Benedetto Lepori
Università della Svizzera italiana, via Buffi 13, 6904 Lugano, Switzerland
Jens Peter Andersen
Senior researcher, Aarhus University
Bibliometrics, research assessment, scientometrics, metrics validation, research quality
Karsten Donnay
University of Zurich, Affolternstrasse 56, 8050 Zurich, Switzerland