Does GenAI Rewrite How We Write? An Empirical Study on Two-Million Preprints

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how generative AI is reshaping the scientific publishing ecosystem, analyzing 2.1 million preprints across disciplines. Methodologically, it employs a multi-level framework integrating interrupted time-series modeling, collaboration and productivity metrics, linguistic complexity analysis, and dynamic topic modeling to systematically track submission timing, author behavior, textual style, and disciplinary evolution. The findings reveal a “selective catalytic effect” of generative AI on scholarly communication: it significantly accelerates submission and revision cycles (by 12% on average), modestly increases linguistic complexity—particularly in computationally intensive fields—and concurrently exacerbates disciplinary divergence. Notably, AI-related preprint output surges while author cohorts become markedly younger. This work provides the first large-scale, multi-dimensional empirical evidence characterizing AI-driven transformation of research practices and scholarly dissemination.

📝 Abstract
Preprint repositories have become central infrastructures for scholarly communication, and their expansion is transforming how research is circulated and evaluated before journal publication. Generative large language models (LLMs) introduce a further potential disruption by altering how manuscripts are written. While speculation abounds, systematic evidence of whether and how LLMs reshape scientific publishing remains limited. This paper addresses the gap through a large-scale analysis of more than 2.1 million preprints spanning 2016–2025 (115 months) across four major repositories (arXiv, bioRxiv, medRxiv, SocArXiv). We introduce a multi-level analytical framework that integrates interrupted time-series models, collaboration and productivity metrics, linguistic profiling, and topic modeling to assess changes in volume, authorship, style, and disciplinary orientation. Our findings reveal that LLMs have accelerated submission and revision cycles, modestly increased linguistic complexity, and disproportionately expanded AI-related topics, with computationally intensive fields benefiting more than others. These results show that LLMs act less as universal disruptors than as selective catalysts, amplifying existing strengths and widening disciplinary divides. By documenting these dynamics, the paper provides the first empirical foundation for evaluating the influence of generative AI on academic publishing and highlights the need for governance frameworks that preserve trust, fairness, and accountability in an AI-enabled research ecosystem.
Problem

Research questions and friction points this paper is trying to address.

Analyzes how LLMs reshape scientific writing and publishing processes
Measures changes in preprint volume, authorship patterns, and linguistic style
Evaluates LLMs' differential impact across academic disciplines and topics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-level analytical framework with interrupted time-series models
Linguistic profiling and topic modeling techniques
Metrics for collaboration and productivity assessment
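The interrupted time-series component above can be illustrated with a minimal segmented-regression sketch on simulated monthly preprint counts. The intervention month, coefficient values, and variable names below are assumptions chosen for illustration only; they are not the paper's actual data or model specification.

```python
import numpy as np

# Hypothetical illustration: interrupted time-series (segmented regression)
# on simulated monthly preprint counts over a 115-month window.
rng = np.random.default_rng(0)
months = np.arange(115)                 # 2016-2025 window (115 months)
t0 = 83                                 # assumed intervention month (illustrative)
post = (months >= t0).astype(float)     # indicator: 1 after the intervention

# Simulated counts: baseline trend plus a post-intervention level and slope shift
y = (100 + 1.5 * months + 40 * post
     + 2.0 * (months - t0) * post
     + rng.normal(0, 5, months.size))

# Design matrix: intercept, pre-existing trend, level change, slope change
X = np.column_stack([
    np.ones_like(months, dtype=float),
    months.astype(float),
    post,
    (months - t0) * post,
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta ~ [intercept, baseline trend, level shift, post-intervention slope shift]
print(beta.round(1))
```

Interpreting the fitted coefficients: a significant level shift (third coefficient) indicates an abrupt jump in submission volume at the intervention, while a significant slope change (fourth coefficient) indicates a sustained acceleration afterwards.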