Authorship Without Writing: Large Language Models and the Senior Author Analogy

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the contested authorship status of large language models (LLMs) in scientific writing. Motivated by the absence of consensus in bioethics and publishing guidelines on AI eligibility for authorship, the study employs philosophical analysis grounded in authoritative authorship criteria, particularly those of the International Committee of Medical Journal Editors (ICMJE). It proposes an analogical framework: LLM generation of complete manuscript drafts is analogous to the supervisory contributions of senior authors, namely defining research scope, ensuring scientific integrity, and upholding scholarly accountability. On this analogy, if current norms recognize non-executional, high-level intellectual contributions as sufficient for authorship, then supervised LLM text generation likewise qualifies; otherwise, foundational revision of authorship criteria is required. The work challenges entrenched authorship paradigms and offers a principled normative basis for updating academic attribution policies in the AI era.

📝 Abstract
The use of large language models (LLMs) in bioethical, scientific, and medical writing remains controversial. While there is broad agreement in some circles that LLMs cannot count as authors, there is no consensus about whether and how humans using LLMs can count as authors. In many fields, authorship is distributed among large teams of researchers, some of whom, including paradigmatic senior authors who guide and determine the scope of a project and ultimately vouch for its integrity, may not write a single word. In this paper, we argue that LLM use (under specific conditions) is analogous to a form of senior authorship. On this view, the use of LLMs, even to generate complete drafts of research papers, can be considered a legitimate form of authorship according to the accepted criteria in many fields. We conclude that either such use should be recognized as legitimate, or current criteria for authorship require fundamental revision. AI use declaration: GPT-5 was used to help format Box 1. AI was not used for any other part of the preparation or writing of this manuscript.
Problem

Research questions and friction points this paper is trying to address.

Debating authorship legitimacy for human-LLM collaborative writing
Comparing LLM use to senior author roles in research teams
Reconciling AI-generated content with existing authorship criteria
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs analogous to senior authorship
LLMs generate complete research drafts
Legitimate authorship under specific conditions
Clint Hurshman
Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
Sebastian Porsdam Mann
Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Advanced Studies in Bioscience Innovation Law, University of Copenhagen, Copenhagen, Denmark
Julian Savulescu
Chen Su Lan Centennial Professor of Medical Ethics
Medical ethics, practical ethics, applied ethics, bioethics, neuroethics
Brian D. Earp
Associate Professor, National University of Singapore and Research Associate, University of Oxford
Bioethics, philosophy of science & AI, relational moral psychology, sex & gender, children's rights