LLM or Human? Perceptions of Trust and Information Quality in Research Summaries

📅 2026-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how readers perceive scientific abstracts generated or edited by large language models (LLMs) and how those perceptions shape judgments of scientific credibility and quality. Using a mixed-methods survey experiment, the research examines whether readers with machine-learning expertise can distinguish human-authored from LLM-authored abstracts, and how their beliefs about the extent of AI involvement shape their assessments. The study finds that abstracts edited by LLMs receive significantly higher ratings than those written solely by humans or generated entirely by LLMs. Although participants struggle to identify the source of a text accurately, their perceived level of AI involvement strongly affects their trust judgments. The study also identifies three distinct reader orientations toward AI-assisted writing, offering empirical grounding for evolving norms in academic publishing.

📝 Abstract
Large Language Models (LLMs) are increasingly used to generate and edit scientific abstracts, yet their integration into academic writing raises questions about trust, quality, and disclosure. Despite growing adoption, little is known about how readers perceive LLM-generated summaries and how these perceptions influence evaluations of scientific work. This paper presents a mixed-methods survey experiment investigating whether readers with ML expertise can distinguish between human- and LLM-generated abstracts, how actual and perceived LLM involvement affects judgments of quality and trustworthiness, and what orientations readers adopt toward AI-assisted writing. Our findings show that participants struggle to reliably identify LLM-generated content, yet their beliefs about LLM involvement significantly shape their evaluations. Notably, abstracts edited by LLMs are rated more favorably than those written solely by humans or LLMs. We also identify three distinct reader orientations toward LLM-assisted writing, offering insights into evolving norms and informing policy around disclosure and acceptable use in scientific communication.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
trust
information quality
scientific communication
AI-assisted writing
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-generated abstracts
trust perception
information quality
AI-assisted writing
disclosure norms
Nil-Jana Akpinar
Microsoft, USA
Sandeep Avula
Amazon AWS AI, USA
Cj Lee
Amazon AWS AI, USA
Brandon Dang
Amazon AWS AI, USA
Kaza Razat
Amazon AWS AI, USA
Vanessa Murdock
Amazon Research
Information Retrieval · Content Moderation · Responsible AI · eCommerce