🤖 AI Summary
This study introduces and empirically tests the concept of "perceptual harm": negative evaluations and discrimination arising from others' suspicion that an individual uses large language models (LLMs). Method: Three online experiments manipulated AI-use cues in fictitious freelance writer profiles and assessed their impact on writing quality ratings and hiring intentions using Likert-scale measures and statistical modeling (logistic regression, mediation analysis). Contribution/Results: Suspicion of AI use significantly reduced perceived writing quality and hiring likelihood; this effect was robust across demographic groups and amplified for certain marginalized populations. The study provides the first systematic empirical evidence of generative AI–induced stigma and its structural inequity consequences, advancing AI ethics and fairness research by proposing a novel theoretical framework grounded in rigorous experimental evidence.
📝 Abstract
Large language models (LLMs) are increasingly integrated into a variety of writing tasks. While these tools can help people by generating ideas or producing higher-quality work, like many other AI tools they risk causing a variety of harms, disproportionately burdening historically marginalized groups. In this work, we introduce and evaluate perceptual harm, a term for the harm caused to users when others perceive or suspect them of using AI. We examined perceptual harms in three online experiments, each of which asked human participants to evaluate the profiles of fictional freelance writers. We asked participants whether they suspected the freelancers of using AI, to rate the quality of their writing, and to judge whether they should be hired. We found some support for perceptual harms against certain demographic groups, but found that perceptions of AI use negatively impacted writing evaluations and hiring outcomes across the board.