🤖 AI Summary
Existing online anti-rumor games are predominantly single-player and static, with low replayability, and thereby oversimplify the complex dynamics of misinformation diffusion and debunking. To address this, we propose an LLM-driven two-player adversarial game framework: one player assumes the role of an "information disseminator" and the other a "debunker," engaging in real-time, dynamic interactions with LLM-simulated heterogeneous public agents to generate or refute false claims, enabling open-ended strategic gameplay and iterative argumentation. The system leverages LLMs as public agents, content generators, and credibility evaluators. A mixed-methods study (N=47) demonstrates statistically significant improvements in media literacy and misinformation detection ability. Qualitative analysis further confirms that, through strategic engagement, players deepen their understanding of information manipulation tactics and evidence-based debunking. This work advances interactive, pedagogically grounded misinformation resilience training via AI-augmented adversarial simulation.
📝 Abstract
Game-based interventions are widely used to combat misinformation online by employing the "inoculation approach". However, most current interventions are designed as single-player games that present players with a limited set of predefined choices. Such restrictions reduce replayability and may lead to an overly simplistic understanding of how misinformation spreads and is debunked. This study seeks to address these issues and empower people to better understand the processes of opinion influence and misinformation debunking. We did this by creating a Player versus Player (PvP) game in which participants attempt to either generate or debunk misinformation to convince LLM-represented public opinion. Using a within-subjects mixed-methods study design (N=47), we found that this game significantly raised participants' media literacy and improved their ability to identify misinformation. Our qualitative exploration revealed how participants' use of debunking and content-creation strategies deepened their understanding of the nature of disinformation. We demonstrate how LLMs can be integrated into PvP games to foster greater understanding of contrasting viewpoints and highlight social challenges.
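The adversarial loop described above (disseminator vs. debunker, arbitrated by LLM-simulated public agents) can be sketched in miniature. Everything here is a hypothetical illustration, not the paper's implementation: the credibility evaluator is stubbed with a toy heuristic where the real system would query an LLM, and the opinion-update rule and weights are invented for the sketch.

```python
# Hypothetical sketch of a PvP anti-rumor round. LLM-simulated public
# agents each hold a belief score in [0, 1] (1 = fully believes the
# claim). The disseminator's claim pushes beliefs up, the debunker's
# rebuttal pulls them down, weighted by a credibility evaluator.

def evaluate_credibility(text: str) -> float:
    """Stub for the LLM credibility evaluator (assumption: the real
    system returns an LLM-derived score in [0, 1]). Toy proxy: more
    detailed arguments score higher, capped at 1.0."""
    return min(len(text.split()) / 20.0, 1.0)

def play_round(beliefs: list[float], claim: str, rebuttal: str) -> list[float]:
    """One round: shift each agent's belief by the credibility gap
    between claim and rebuttal, clamped to [0, 1]."""
    push = evaluate_credibility(claim)
    pull = evaluate_credibility(rebuttal)
    return [max(0.0, min(1.0, b + 0.3 * (push - pull))) for b in beliefs]

# Heterogeneous public agents start with different prior beliefs.
agents = [0.5, 0.4, 0.6]
agents = play_round(
    agents,
    claim="Miracle cure X works instantly and doctors hate it",
    rebuttal="Peer-reviewed trials found no effect, and the source "
             "site has a documented history of fabricated health claims",
)
print(agents)  # beliefs fall: the rebuttal out-scores the claim here
```

In this toy round the debunker's more detailed rebuttal scores higher than the claim, so every agent's belief decreases; in the actual game the LLM evaluator and agent personas would make that outcome contested rather than mechanical.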