🤖 AI Summary
Amid escalating AI-generated disinformation, the public's capacity for critical evaluation risks deteriorating. Method: We conducted a four-week longitudinal experiment on human-AI collaborative news veracity judgment, using a pretest–posttest design with unassisted independent assessments to measure changes in discrimination accuracy. Contribution/Results: Contrary to expectations, real-time AI assistance yielded only a short-term accuracy gain of 21%; by week four, unassisted accuracy had declined by 15.3%, consistent with a “cognitive passivity” effect in which overreliance on AI impairs metacognitive monitoring and critical appraisal. We introduce a research paradigm that integrates human-AI interaction experiments, dynamic assessment frameworks, and explainable AI decision support. This study provides the first empirical evidence of the double-edged impact of AI assistance on information literacy, demonstrating that poorly designed AI tools may erode rather than enhance human judgment. The findings offer theoretical grounding and empirical support for developing trustworthy AI educational interventions that augment, rather than supplant, human cognitive agency.
📝 Abstract
Given the growing prevalence of fake information, including increasingly realistic AI-generated news, there is an urgent need to train people to better evaluate and detect misinformation. While interactions with AI have been shown to durably reduce people's beliefs in false information, it is unclear whether these interactions also teach people the skills to discern false information themselves. We conducted a month-long study in which 67 participants classified news headline-image pairs as real or fake and discussed their assessments with an AI system; unassisted evaluations of unseen news items measured accuracy before, during, and after AI assistance. While AI assistance produced immediate improvements during AI-assisted sessions (+21% on average), participants' unassisted performance on new items had declined significantly by week 4 (-15.3%). These results indicate that while AI may help in the moment, it ultimately degrades long-term misinformation detection abilities.