🤖 AI Summary
This study investigates the efficacy and unintended cognitive consequences of AI-generated image labels for misinformation detection. Using a mixed-methods approach (five focus groups for qualitative analysis plus a pre-registered, cross-regional online experiment with more than 1,300 participants from the U.S. and EU), we identify a novel phenomenon: “trust shift.” While labels improve the accuracy of AI-image identification, they also distort trust systematically: the credibility of *misleading* AI-generated images decreases by 19%, but the credibility of *benign* AI-generated images also drops by 14%; at the same time, the credibility of *misleading* human-created images anomalously increases by 22%. These findings show that current labeling strategies fail to calibrate user trust accurately, instead promoting over-reliance on labels and impairing critical evaluation of authentic content. The results challenge the implicit assumption that labeling inherently enhances trustworthiness, revealing critical cognitive boundaries for AI-content governance and urging caution in label-based interventions.
📝 Abstract
Generative artificial intelligence is developing rapidly and changing how people interact with information and digital media. Because it is increasingly used to create deceptively realistic misinformation, lawmakers have introduced regulations requiring the disclosure of AI-generated content. However, little is known about whether such labels reduce the risks of AI-generated misinformation. Our work addresses this research gap. Focusing on AI-generated images, we study the implications of labels, including the possibility of mislabeling. Assuming that simplicity, transparency, and trust are likely to affect the successful adoption of such labels, we first qualitatively explore users' opinions and expectations of AI labeling in five focus groups. Second, we conduct a pre-registered online survey with more than 1,300 U.S. and EU participants to quantitatively assess the effect of AI labels on users' ability to recognize misinformation containing either human-made or AI-generated images. Our focus groups show that, while participants have concerns about the practical implementation of labeling, they consider it helpful for identifying AI-generated images and avoiding deception. Regarding security benefits, however, our survey reveals a more ambiguous picture, suggesting that users may over-rely on labels. While inaccurate claims supported by labeled AI-generated images were rated less credible than those with unlabeled AI-generated images, belief in accurate claims also decreased when they were accompanied by a labeled AI-generated image. Moreover, we find an undesired side effect: human-made images conveying inaccurate claims were perceived as more credible in the presence of labels.