🤖 AI Summary
This study addresses the risks of visual authenticity misattribution and disinformation posed by highly realistic AI-generated images (AIGIs). Employing a mixed-methods approach (quantitative content analysis, qualitative image interpretation, and cross-platform annotation), we conducted the first large-scale empirical investigation of its kind, using a dataset of 30,824 platform-sourced AIGIs collected from Instagram and Twitter. Results reveal that AIGIs frequently depict celebrities and political figures, exhibiting strong hyperrealism with minimal detectable AI artifacts and professional-grade aesthetic quality, characteristics that collectively increase their likelihood of being misclassified as authentic photographs. The study identifies the critical “high-fidelity, low-detectability” profile of contemporary AIGIs, providing foundational empirical evidence for visual disinformation governance. It further proposes actionable, stakeholder-specific design interventions targeting platforms, creators, and end users to mitigate authenticity confusion and strengthen societal resilience against synthetic media threats.
📝 Abstract
Advances in generative models have made Artificial Intelligence-Generated Images (AIGIs) nearly indistinguishable from real photographs. Leveraging a large corpus of 30,824 AIGIs collected from Instagram and Twitter, and combining quantitative content analysis with qualitative analysis, this study unpacks the photorealism of AIGIs along four key dimensions: content, human, aesthetic, and production features. We find that photorealistic AIGIs often depict human figures, especially celebrities and politicians, with a high degree of surrealism and aesthetic professionalism, alongside few overt signals of AI production. This study is the first to empirically investigate photorealistic AIGIs across multiple platforms using a mixed-methods approach. Our findings offer important implications and insights for understanding visual misinformation and mitigating the risks associated with photorealistic AIGIs. We also propose design recommendations to promote the responsible use of AIGIs.