🤖 AI Summary
Current AI policies adopted by academic journals fail to effectively curb the surge in AI-assisted writing and suffer from widespread deficiencies in transparency and enforceability.
Method: This study conducts the first global-scale empirical assessment, analyzing over 5,000 journals and 5.2 million papers through large-scale metadata and full-text mining, multilingual natural language processing, and statistical modeling.
Contribution/Results: Although 70% of surveyed journals have implemented AI policies, only about 0.1% (76 of 75,000) of papers published since 2023 explicitly disclose AI use, indicating near-total noncompliance and revealing a pronounced "transparency gap." The analysis identifies critical failure domains, including inconsistent policy definitions, absent enforcement mechanisms, and inadequate author guidance. These findings provide foundational empirical evidence and an interdisciplinary methodological framework for reforming scholarly publishing ethics and advancing responsible, accountable AI integration in scientific communication.
📝 Abstract
The rapid integration of generative AI into academic writing has prompted widespread policy responses from journals and publishers, yet the effectiveness of these policies remains unclear. Here, we analyze 5,114 journals and over 5.2 million papers to evaluate the real-world impact of AI usage guidelines. We show that although 70% of journals have adopted AI policies (primarily requiring disclosure), researchers' use of AI writing tools has increased dramatically across disciplines, with no significant difference between journals with and without policies. Non-English-speaking countries, the physical sciences, and journals with high open-access (OA) shares exhibit the highest growth rates. Crucially, full-text analysis of 164k scientific publications reveals a striking transparency gap: of the 75k papers published since 2023, only 76 (0.1%) explicitly disclosed AI use. Our findings suggest that current policies have largely failed either to promote transparency or to restrain AI adoption. We urge a re-evaluation of ethical frameworks to foster responsible AI integration in science.