Dialogues with AI Reduce Beliefs in Misinformation but Build No Lasting Discernment Skills

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Amid escalating AI-generated disinformation, the public's capacity for critical evaluation is at risk of deteriorating. Method: We conducted a four-week longitudinal experiment on human-AI collaborative news veracity judgment, using a pretest–posttest design with unassisted independent assessments to measure changes in discrimination accuracy. Contribution/Results: Contrary to expectations, real-time AI assistance yielded only a short-term 21% accuracy gain; by week four, independent accuracy had declined by 15.3%, consistent with a "cognitive passivity" effect in which overreliance on AI impairs metacognitive monitoring and critical appraisal. We introduce a research paradigm integrating human-AI interaction experiments, dynamic assessment frameworks, and explainable AI decision support. This study provides the first empirical evidence of AI assistance's double-edged impact on information literacy, showing that poorly designed AI tools may erode rather than enhance human judgment. The findings offer theoretical grounding and empirical support for developing trustworthy AI educational interventions that augment, rather than supplant, human cognitive agency.

📝 Abstract
Given the growing prevalence of fake information, including increasingly realistic AI-generated news, there is an urgent need to train people to better evaluate and detect misinformation. While interactions with AI have been shown to durably reduce people's beliefs in false information, it is unclear whether these interactions also teach people the skills to discern false information themselves. We conducted a month-long study in which 67 participants classified news headline-image pairs as real or fake, discussed their assessments with an AI system, and then completed unassisted evaluations of unseen news items, allowing us to measure accuracy before, during, and after AI assistance. While AI assistance produced immediate improvements during AI-assisted sessions (+21% on average), participants' unassisted performance on new items declined significantly by week 4 (-15.3%). These results indicate that while AI may help in the moment, it ultimately degrades long-term misinformation detection abilities.
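The pretest–posttest comparison described above can be sketched in a few lines. The helper function and all judgment data below are illustrative assumptions, not the study's actual items or results; the sketch only shows how discrimination accuracy and its change in percentage points would be computed from unassisted real/fake judgments.

```python
# Hypothetical sketch of a pretest-posttest accuracy comparison.
# All labels and data here are made up for illustration; they are
# not drawn from the study described above.

def accuracy(judgments, truths):
    """Fraction of real/fake judgments that match ground truth."""
    correct = sum(j == t for j, t in zip(judgments, truths))
    return correct / len(judgments)

# Illustrative ground truth and one participant's unassisted
# judgments on unseen items at pretest and at week 4.
truths  = ["fake", "real", "fake", "fake", "real", "real", "fake", "real"]
pretest = ["fake", "real", "real", "fake", "real", "real", "fake", "real"]
week4   = ["fake", "real", "real", "fake", "fake", "real", "real", "real"]

pre_acc  = accuracy(pretest, truths)   # baseline unassisted accuracy
post_acc = accuracy(week4, truths)     # unassisted accuracy after AI use

change = (post_acc - pre_acc) * 100    # change in percentage points
print(f"pretest {pre_acc:.0%}, week 4 {post_acc:.0%}, change {change:+.1f} pp")
```

Averaging such per-participant deltas over the cohort is what yields summary figures like the +21% assisted gain and the -15.3% unassisted decline reported in the abstract.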
Problem

Research questions and friction points this paper is trying to address.

AI dialogues reduce misinformation beliefs but fail to build lasting discernment skills
AI assistance improves immediate detection but degrades long-term unassisted accuracy
Training people to independently evaluate fake news remains an unresolved challenge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive AI dialogue system for discussing news veracity judgments
Month-long longitudinal design measuring unassisted accuracy before, during, and after AI assistance
First empirical evidence that AI assistance degrades long-term independent detection skills