🤖 AI Summary
This study investigates how readers perceive scientific abstracts generated or edited by large language models (LLMs) and how those perceptions shape judgments of quality and trustworthiness. Using a mixed-methods survey experiment, the research examines whether readers with machine learning backgrounds can distinguish human-written from LLM-authored abstracts and how their beliefs about the extent of LLM involvement affect their evaluations. Abstracts edited by LLMs received significantly higher ratings than those written solely by humans or generated entirely by LLMs. Although participants struggled to reliably identify the source of a given text, their perceived level of LLM involvement strongly influenced their trust judgments. The study also identifies three distinct reader orientations toward AI-assisted writing, offering empirical grounding for evolving norms around disclosure and acceptable use in academic publishing.
📝 Abstract
Large Language Models (LLMs) are increasingly used to generate and edit scientific abstracts, yet their integration into academic writing raises questions about trust, quality, and disclosure. Despite growing adoption, little is known about how readers perceive LLM-generated summaries and how these perceptions influence evaluations of scientific work. This paper presents a mixed-methods survey experiment investigating whether readers with ML expertise can distinguish between human- and LLM-generated abstracts, how actual and perceived LLM involvement affects judgments of quality and trustworthiness, and what orientations readers adopt toward AI-assisted writing. Our findings show that participants struggle to reliably identify LLM-generated content, yet their beliefs about LLM involvement significantly shape their evaluations. Notably, abstracts edited by LLMs are rated more favorably than those written solely by humans or generated entirely by LLMs. We also identify three distinct reader orientations toward LLM-assisted writing, offering insights into evolving norms and informing policy on disclosure and acceptable use in scientific communication.