Crafting Synthetic Realities: Examining Visual Realism and Misinformation Potential of Photorealistic AI-Generated Images

📅 2024-09-26
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses the risks of misattributed visual authenticity and misinformation posed by highly realistic AI-generated images (AIGIs). Employing a mixed-methods approach that combines quantitative content analysis with qualitative image interpretation, the authors conduct the first large-scale, cross-platform empirical investigation of photorealistic AIGIs, using a dataset of 30,824 AI-generated images collected from Instagram and Twitter. Results reveal that AIGIs frequently depict celebrities and political figures, exhibiting strong hyperrealism and professional-grade aesthetic quality alongside minimal detectable AI artifacts, characteristics that collectively increase their likelihood of being misclassified as authentic photographs. The study identifies the critical "high-fidelity, low-detectability" character of contemporary AIGIs, providing foundational empirical evidence for visual misinformation governance, and proposes actionable, stakeholder-specific design interventions targeting platforms, creators, and end users to mitigate authenticity confusion and strengthen societal resilience against synthetic media.

📝 Abstract
Advances in generative models have created Artificial Intelligence-Generated Images (AIGIs) that are nearly indistinguishable from real photographs. Leveraging a large corpus of 30,824 AIGIs collected from Instagram and Twitter, and combining quantitative content analysis with qualitative analysis, this study unpacks the photorealism of AIGIs across four key dimensions: content, human, aesthetic, and production features. We find that photorealistic AIGIs often depict human figures, especially celebrities and politicians, with a high degree of surrealism and aesthetic professionalism, alongside a low degree of overt signals of AI production. This study is the first to empirically investigate photorealistic AIGIs across multiple platforms using a mixed-methods approach. Our findings provide important implications and insights for understanding visual misinformation and mitigating potential risks associated with photorealistic AIGIs. We also propose design recommendations to enhance the responsible use of AIGIs.
Problem

Research questions and friction points this paper is trying to address.

Examines visual realism and misinformation potential of AI-generated images.
Investigates photorealism across content, human, aesthetic, and production features.
Proposes design recommendations for responsible use of AI-generated images.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes 30,824 AI-generated images collected from Instagram and Twitter.
Combines quantitative and qualitative methods to assess visual realism.
Proposes design recommendations for the responsible use of AI-generated images.