Understanding Reader Perception Shifts upon Disclosure of AI Authorship

📅 2025-10-27
🤖 AI Summary
This study investigates how disclosing AI writing assistance affects readers’ perceptions of authors. Through a controlled experiment (N=261) spanning six communicative contexts, and integrating quantitative statistical analysis with thematic coding of 990 reader feedback responses, we find that AI disclosure consistently reduces perceived author credibility, warmth, competence, and likability—particularly diminishing perceptions of authenticity and effort in socially oriented texts. Crucially, readers’ AI literacy significantly mitigates these negative effects. This work is the first to empirically demonstrate contextual heterogeneity in AI disclosure effects, revealing that impact magnitude and valence vary systematically across communicative settings. We therefore propose the principle of “context-sensitive transparency design” for AI-assisted writing systems. Our findings provide empirical grounding and actionable guidelines for ethical AI deployment and human-AI collaborative interface design in writing support technologies.

📝 Abstract
As AI writing support becomes ubiquitous, how disclosing its use affects reader perception remains a critical, underexplored question. We conducted a study with 261 participants to examine how revealing varying levels of AI involvement shifts author impressions across six distinct communicative acts. Our analysis of 990 responses shows that disclosure generally erodes perceptions of trustworthiness, caring, competence, and likability, with the sharpest declines in social and interpersonal writing. A thematic analysis of participants' feedback links these negative shifts to a perceived loss of human sincerity, diminished author effort, and the contextual inappropriateness of AI. Conversely, we find that higher AI literacy mitigates these negative perceptions, leading to greater tolerance or even appreciation for AI use. Our results highlight the nuanced social dynamics of AI-mediated authorship and inform design implications for creating transparent, context-sensitive writing systems that better preserve trust and authenticity.
Problem

Research questions and friction points this paper is trying to address.

How does disclosing AI authorship affect readers' perceptions of authors?
How does the level of AI involvement shift impressions of trustworthiness and competence?
How does readers' AI literacy mitigate negative perceptions of AI-assisted writing?
Innovation

Methods, ideas, or system contributions that make the work stand out.

A disclosure study revealing how AI authorship shifts reader perceptions across six communicative contexts
Evidence that higher AI literacy mitigates negative perception effects
The principle of context-sensitive transparency design for AI-assisted writing systems
Hiroki Nakano
IIS Lab, The University of Tokyo, Tokyo, Japan
Jo Takezawa
IIS Lab, The University of Tokyo, Tokyo, Japan
Fabrice Matulic
Preferred Networks Inc., Tokyo, Japan
Chi-Lan Yang
The University of Tokyo
Human-Computer Interaction, Computer-Supported Cooperative Work, Mediated Communication
Koji Yatani
University of Tokyo
Human-Computer Interaction, Ubiquitous Computing, AI/IoT Applications, Digital Healthcare, Usable Security