🤖 AI Summary
This study investigates whether AI models trained on copyrighted literary works can faithfully emulate award-winning authors’ styles and surpass human writers in literary quality and reader preference.
Method: In a preregistered design, MFA-trained expert writers were compared against three frontier AI models (ChatGPT, Claude, and Gemini) using in-context prompting; ChatGPT was additionally fine-tuned on individual authors' complete works. Evaluation combined blind pairwise comparisons by expert and lay readers, state-of-the-art AI-detection tools, and mediation analysis.
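The paper does not publish its training pipeline, but the per-author setup can be sketched with OpenAI's fine-tuning API. Everything below (the chat-format prompts, file names, and base model choice) is a hypothetical illustration under stated assumptions, not the authors' actual code.

```python
# Hypothetical sketch of per-author fine-tuning via the OpenAI API.
# The JSONL layout is OpenAI's documented chat fine-tuning format; the
# prompt wording, file name, and base model are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_training_file(excerpts: list[str], author: str, path: str) -> None:
    """Write one chat-format training example per excerpt from the author's corpus."""
    with open(path, "w", encoding="utf-8") as f:
        for excerpt in excerpts:
            example = {
                "messages": [
                    {"role": "system",
                     "content": f"You write literary fiction in the style of {author}."},
                    {"role": "user",
                     "content": "Write a short literary excerpt (at most 450 words)."},
                    {"role": "assistant", "content": excerpt},
                ]
            }
            f.write(json.dumps(example) + "\n")

# Upload the corpus and launch the fine-tuning job.
build_training_file(excerpts=["..."], author="Author Name", path="author_corpus.jsonl")
uploaded = client.files.create(file=open("author_corpus.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=uploaded.id,
    model="gpt-4o-mini-2024-07-18",  # assumed base model; the paper says only "ChatGPT"
)
print(job.id, job.status)
```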
Contribution/Results: Fine-tuned outputs significantly outperformed professional human writers on stylistic fidelity (OR = 8.16) and writing quality (OR = 1.87) and were preferred by readers; they were flagged by state-of-the-art AI detectors in only 3% of cases, and the median per-author adaptation cost of $81 represents a 99.7% reduction relative to typical professional writer compensation. This is the first preregistered empirical study to robustly demonstrate that AI-driven stylistic replication can exceed expert human performance, providing novel evidence for the feasibility and evaluation of AI-authored literature.
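To make the reported odds ratios concrete: in a forced-choice pairwise design, OR = 8.16 means the odds of a reader preferring the fine-tuned AI excerpt are roughly eight times the odds of preferring the human one. Below is a minimal sketch of computing an odds ratio with a 95% Wald confidence interval from raw preference counts; the counts are invented for illustration, and the paper's actual analysis is likely a mixed-effects logistic model accounting for raters and authors.

```python
# Minimal sketch: odds ratio and 95% Wald CI from pairwise preference counts.
# Counts below are invented for illustration only.
import math

def odds_ratio_ci(ai_preferred: int, human_preferred: int, z: float = 1.96):
    """OR of choosing AI over human in forced-choice pairs, with a Wald CI."""
    or_hat = ai_preferred / human_preferred
    se_log_or = math.sqrt(1 / ai_preferred + 1 / human_preferred)
    lo = math.exp(math.log(or_hat) - z * se_log_or)
    hi = math.exp(math.log(or_hat) + z * se_log_or)
    return or_hat, (lo, hi)

# E.g., if experts preferred the fine-tuned AI text in 155 of 174 pairs:
or_hat, (lo, hi) = odds_ratio_ci(ai_preferred=155, human_preferred=19)
print(f"OR = {or_hat:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # OR ≈ 8.16
```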
📝 Abstract
The use of copyrighted books for training AI models has led to numerous lawsuits from authors concerned about AI's ability to generate derivative content. Yet it remains unclear whether these models can generate high-quality literary text while emulating authors' styles. To answer this, we conducted a preregistered study comparing MFA-trained expert writers with three frontier AI models (ChatGPT, Claude, and Gemini) in writing up to 450-word excerpts emulating 50 award-winning authors' diverse styles. In blind pairwise evaluations by 159 representative expert and lay readers, AI-generated text from in-context prompting was strongly disfavored by experts for both stylistic fidelity (OR = 0.16, p < 10^-8) and writing quality (OR = 0.13, p < 10^-7), but showed mixed results with lay readers. However, fine-tuning ChatGPT on individual authors' complete works completely reversed these findings: experts now favored AI-generated text for stylistic fidelity (OR = 8.16, p < 10^-13) and writing quality (OR = 1.87, p = 0.010), with lay readers showing similar shifts. These effects generalize across authors and styles. The fine-tuned outputs were rarely flagged as AI-generated (3% vs. 97% for in-context prompting) by the best AI detectors. Mediation analysis shows this reversal occurs because fine-tuning eliminates detectable AI stylistic quirks (e.g., cliché density) that penalize in-context outputs. While we do not account for the additional human effort required to transform raw AI output into cohesive, publishable prose, the median fine-tuning and inference cost of $81 per author represents a 99.7% reduction compared to typical professional writer compensation. Author-specific fine-tuning thus enables non-verbatim AI writing that readers prefer to expert human writing, providing empirical evidence directly relevant to copyright's fourth fair-use factor, the "effect upon the potential market or value" of the source works.
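The mediation claim (fine-tuning improves preference by removing detectable quirks such as cliché density) can be illustrated with a standard bootstrap test of the indirect effect a·b. The sketch below uses ordinary least squares on synthetic data; the variable names, the single-mediator linear setup, and all coefficients are assumptions for illustration, not the paper's model specification.

```python
# Sketch of a single-mediator bootstrap mediation analysis:
# condition (fine-tuned vs. in-context) -> cliché density -> preference score.
# Data here are synthetic; the paper's specification may differ.
import numpy as np

rng = np.random.default_rng(0)
n = 500
fine_tuned = rng.integers(0, 2, n)                        # 1 = fine-tuned, 0 = in-context
cliche = 2.0 - 1.5 * fine_tuned + rng.normal(0, 0.5, n)   # fine-tuning lowers cliché density
prefer = 0.2 * fine_tuned - 0.8 * cliche + rng.normal(0, 1, n)  # clichés hurt preference

def ols_slope(y, *xs):
    """Coefficient of the first regressor in an OLS fit of y on [1, xs...]."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def indirect_effect(idx):
    a = ols_slope(cliche[idx], fine_tuned[idx])               # treatment -> mediator
    b = ols_slope(prefer[idx], cliche[idx], fine_tuned[idx])  # mediator -> outcome
    return a * b

boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(2000)])
print("indirect effect a*b:", indirect_effect(np.arange(n)))
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```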