The Great AI Witch Hunt: Reviewers Perception and (Mis)Conception of Generative AI in Research Writing

📅 2024-06-27
🏛️ Computers in Human Behavior
📈 Citations: 19
Influential: 0
🤖 AI Summary
The widespread adoption of generative AI (GenAI) in scholarly writing has created challenges for peer reviewers in reliably detecting AI-assisted manuscripts, raising concerns about review integrity and fairness. Method: The authors conducted a snippet-based online survey with 17 reviewers from top-tier HCI conferences, combining qualitative content analysis, subjective evaluation coding, and cross-case comparison. Contribution/Results: This is the first empirical study to demonstrate that reviewers cannot reliably distinguish AI-augmented from human-written text, yet inter-reviewer consistency remains unaffected. While GenAI enhances linguistic readability and lexical diversity, it diminishes the depth of methodological detail and authorial reflexivity, eroding the perceived "human touch." The authors propose a tool-agnostic review principle, advocating that assessment focus exclusively on research substance rather than authoring tools, and call for revised review guidelines centered on scholarly rigor and conceptual contribution.

📝 Abstract
Generative AI (GenAI) use in research writing is growing fast. However, it is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts. To investigate the impact of AI-augmented writing on peer reviews, we conducted a snippet-based online survey with 17 peer reviewers from top-tier HCI conferences. Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors. Reviewers consistently struggled to distinguish between human and AI-augmented writing, but their judgements remained consistent. They noted the loss of a "human touch" and subjective expressions in AI-augmented writing. Based on our findings, we advocate for reviewer guidelines that promote impartial evaluations of submissions, regardless of any personal biases towards GenAI. The quality of the research itself should remain a priority in reviews, regardless of any preconceived notions about the tools used to create it. We emphasize that researchers must maintain their authorship and control over the writing process, even when using GenAI's assistance.
Problem

Research questions and friction points this paper is trying to address.

Investigating how peer reviewers detect AI-augmented research manuscripts
Examining reviewer biases against generative AI in academic writing evaluation
Addressing loss of human touch in AI-assisted research writing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Snippet-based online survey with 17 peer reviewers from top-tier HCI conferences
Analysis of GenAI's impact on readability, language diversity, and informativeness
Proposed reviewer guidelines for impartial, tool-agnostic evaluation of submissions