🤖 AI Summary
The proliferation of scientific denialism and misinformation on social media, coupled with low public media literacy, poses significant challenges to democratic discourse. Method: This study designed and implemented an interdisciplinary seminar in which students developed role-playing AI chatbots using the Rasa framework; leveraging large language models (LLMs), the bots simulated science deniers to generate multi-turn, structured pseudoscientific dialogues for immersive, interactive training. Contribution/Results: The project empirically validates the technical feasibility and pedagogical efficacy of LLM-driven, role-based dialogue for media literacy education. It yields a reusable pedagogical paradigm and a functional tool prototype for misinformation resilience training, thereby advancing AI’s role in cultivating informational resilience within democratic societies.
📝 Abstract
In recent times, discussions on social media platforms have increasingly come under scrutiny due to the proliferation of science denial and fake news. Traditional solutions, such as regulatory actions, have been implemented to mitigate the spread of misinformation; however, these measures alone are not sufficient. To complement these efforts, educational approaches are becoming essential in empowering users to critically engage with misinformation. Conversation training, through serious games or personalized methods, has emerged as a promising strategy to help users handle science denial and toxic conversation tactics. This paper suggests an interdisciplinary seminar to explore the suitability of Large Language Models (LLMs) acting as a persona of a science denier to support people in identifying misinformation and improving resilience against toxic interactions. In the seminar, groups of four to five students will develop an AI-based chatbot that enables realistic interactions with science-denial argumentation structures. The task involves planning the setting, integrating a Large Language Model to facilitate natural dialogues, implementing the chatbot using the RASA framework, and evaluating the outcomes in a user study. It is crucial that users understand what they need to do during the interaction, how to conclude it, and how the relevant information is conveyed. The seminar does not aim to develop chatbots for practicing debunking but serves to teach AI technologies and test the feasibility of this idea for future applications. The chatbot seminar is conducted as a hybrid, parallel master's module at the participating educational institutions.