🤖 AI Summary
The proliferation of deepfake images poses a severe threat to public trust and democratic processes, yet existing digital literacy interventions lack scalability and empirical validation. This study systematically compares five scalable educational strategies (text-based guidance, visual exemplars, gamified training, implicit learning, and explanations of AI generation principles) for improving deepfake detection accuracy in both the short and long term. As the first longitudinal empirical evaluation of multimodal digital literacy interventions, it demonstrates that the most effective approach boosts deepfake detection accuracy by up to 13 percentage points without compromising trust in authentic images. The findings empirically validate lightweight, deployable educational interventions and establish a dual-objective evaluation framework that balances "detecting fakes" with "trusting truths", providing critical evidence for digital literacy policy and practice in the AI era.
📝 Abstract
Deepfakes, i.e., images generated by artificial intelligence (AI), can erode trust in institutions and compromise election outcomes, as people often struggle to discern real images from deepfakes. Improving digital literacy can help address these challenges, yet scalable and effective approaches remain largely unexplored. Here, we compare the efficacy of five digital literacy interventions to boost people's ability to discern deepfakes: (1) textual guidance on common indicators of deepfakes; (2) visual demonstrations of these indicators; (3) a gamified exercise for identifying deepfakes; (4) implicit learning through repeated exposure and feedback; and (5) explanations of how deepfakes are generated with the help of AI. We conducted an experiment with N=1,200 participants from the United States to test the immediate and long-term effectiveness of our interventions. Our results show that our interventions can boost deepfake discernment by up to 13 percentage points while maintaining trust in real images. Altogether, our interventions are scalable, suitable for diverse populations, and highly effective for boosting deepfake detection while maintaining trust in truthful information.