🤖 AI Summary
This study addresses the academic integrity and publishing policy challenges posed by AI authorship. Using an action research approach, we designed and operated a fully functional AI academic identity, "Rachel So," which published 12 AI-generated papers in multidisciplinary journals and conferences between March and October 2025. The methodology systematically simulated the entire scholarly workflow: manuscript submission, revision, peer review, and citation-based scholarly interaction. Through this process, we empirically examined how publishers, reviewers, and the broader research community respond to AI-authored submissions. Our findings provide the first empirical delineation of the emerging acceptability boundaries for AI authors within current scholarly infrastructure. We identify critical institutional gaps, including author identity verification, accountability attribution, and contribution assessment, thereby providing foundational evidence and actionable policy insights for governing AI's role in scholarly knowledge production.
📝 Abstract
This paper documents Project Rachel, an action research study that created and tracked a complete AI academic identity named Rachel So. Through the careful publication of AI-generated research papers, we investigate how the scholarly ecosystem responds to AI authorship. Rachel So published 10+ papers between March and October 2025, was cited by other work, and received a peer review invitation. We discuss the implications of AI authorship for publishers, researchers, and the scientific system at large. This work contributes empirical action research data to the necessary debate about the future of scholarly communication with superhuman, hyper-capable AI systems.