Can dialogues with AI systems help humans better discern visual misinformation?

📅 2025-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether human-AI dialogue can enhance immediate human discernment of AI-generated images and associated fabricated news. Method: A pretest-posttest controlled experiment in which 80 participants engaged in 1,310 structured human-AI dialogues, guided by prompt engineering and grounded in a manually annotated dataset of image-text misinformation pairs. Contribution/Results: We provide the first empirical evidence that dialogue-based intervention significantly improves immediate detection accuracy from 60% to 90% (p < 0.001). However, this gain shows no cross-sample transfer: once AI assistance is removed, accuracy on novel samples reverts to baseline (60%, p = 0.88). These findings reveal that current interactive interventions yield strong but transient cognitive effects and fail to produce durable epistemic improvement. The results highlight a fundamental limitation of "use-and-discard" learning in human-AI collaboration for misinformation detection and establish critical empirical boundaries for designing transferable AI literacy interventions.

📝 Abstract
The widespread emergence of manipulated news media content poses significant challenges to online information integrity. This study investigates whether dialogues with AI about AI-generated images and associated news statements can increase human discernment abilities and foster short-term learning in detecting misinformation. We conducted a study with 80 participants who engaged in structured dialogues with an AI system about news headline-image pairs, generating 1,310 human-AI dialogue exchanges. Results show that AI interaction significantly boosts participants' accuracy in identifying real versus fake news content from approximately 60% to 90% (p < 0.001). However, these improvements do not persist when participants are presented with new, unseen image-statement pairs without AI assistance, with accuracy returning to baseline levels (~60%, p = 0.88). These findings suggest that while AI systems can effectively change immediate beliefs about specific content through persuasive dialogue, they may not produce lasting improvements that transfer to novel examples, highlighting the need for developing more effective interventions that promote durable learning outcomes.
Problem

Research questions and friction points this paper is trying to address.

Can dialogues with an AI system improve human detection of visual misinformation?
Do short-term learning gains from AI interaction persist once assistance is removed?
How can interventions be designed so that detection skills transfer durably to new content?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured human-AI dialogues raise detection accuracy from ~60% to 90% on discussed content
First empirical evidence that these gains do not transfer to novel, unassisted examples
Manually annotated image-text misinformation dataset grounding 1,310 dialogue exchanges