🤖 AI Summary
This study addresses the lack of large-scale empirical evidence on how AI-generated misleading content spreads on real-world social media platforms. Method: Leveraging 91,452 misleading posts on X (formerly Twitter) annotated via Community Notes, we integrate community-verified labeling, NLP-driven sentiment and credibility analysis, statistical modeling, and comparative propagation dynamics analysis. Contribution/Results: We identify systematic distinctions between AI-generated and human-authored misinformation: AI-generated misleading posts exhibit a stronger entertainment orientation and more positive sentiment, originate predominantly from low-follower accounts, and achieve higher virality, yet are perceived as slightly less believable and harmful than conventional misinformation. Crucially, this work characterizes distinctive signatures of AI-generated misinformation across three dimensions (sentiment polarity, diffusion pathways, and source network structure), thereby filling a critical gap in large-scale, platform-anchored empirical research. The findings provide foundational evidence to inform platform-level governance policies and the development of robust AI-content detection systems.
📝 Abstract
AI-generated misinformation (e.g., deepfakes) poses a growing threat to information integrity on social media. However, prior research has largely focused on its potential societal consequences rather than its real-world prevalence. In this study, we conduct a large-scale empirical analysis of AI-generated misinformation on the social media platform X. Specifically, we analyze a dataset comprising N=91,452 misleading posts, both AI-generated and non-AI-generated, that have been identified and flagged through X's Community Notes platform. Our analysis yields four main findings: (i) AI-generated misinformation is more often centered on entertaining content and tends to exhibit a more positive sentiment than conventional forms of misinformation, (ii) it is more likely to originate from smaller user accounts, (iii) despite this, it is significantly more likely to go viral, and (iv) it is slightly less believable and harmful than conventional misinformation. Altogether, our findings highlight the unique characteristics of AI-generated misinformation on social media. We discuss important implications for platforms and future research.