BotSim: Mitigating The Formation Of Conspiratorial Societies with Useful Bots

📅 2026-01-06
🏛️ Journal of Artificial Societies and Social Simulation
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of malicious social media bots accelerating the spread of conspiracy theories, which fosters collective false beliefs and overwhelms conventional human fact-checking efforts. To counter this, the authors propose BotSim—a multi-agent social simulation model that integrates two types of beneficial bots into a small-world network: corrective bots that debunk misinformation and proactive bots that amplify accurate information. Moving beyond the prevailing view of social bots solely as threats, this work demonstrates for the first time that beneficial bots can actively sustain a healthy information ecosystem. Experimental results show that proactive dissemination strategies significantly outperform reactive correction in curbing conspiracy theory diffusion, offering superior resource efficiency and long-term sustainability.

📝 Abstract
A society can become a conspiratorial society: one in which a majority of humans believe, and therefore spread, conspiracy theories. Artificial intelligence has given rise to social media bots that can spread conspiracies in an automated fashion. Currently, organizations combat the spread of conspiracies through manual fact-checking processes and the dissemination of counter-narratives. However, the effects of harnessing the same automation to create useful bots are not well explored. To address this, we create BotSim, an Agent-Based Model of a society in which useful bots are introduced into a small-world network. These useful bots are Info-Correction Bots, which correct bad information into good, and Good Bots, which put out good messaging. The simulated agents interact by generating, consuming, and propagating information. Our results show that, left unchecked, Bad Bots can create a conspiratorial society, and that this can be mitigated by either Info-Correction Bots or Good Bots; however, Good Bots are more efficient and sustainable than Info-Correction Bots: proactive good messaging is more resource-effective than reactive information correction. With these observations, we expand the concept of the bot from a malicious social media agent to an automated social media agent that can be used for both good and bad purposes. These results have implications for designing communication strategies to maintain a healthy social cyber ecosystem.
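The setup described in the abstract can be sketched as a minimal agent-based simulation. This is an illustrative toy model, not the authors' actual BotSim implementation: the network generator, the belief encoding (-1 conspiracy, 0 neutral, +1 accurate), and all probabilities and parameter values below are assumptions chosen only to make the mechanism concrete.

```python
import random


def watts_strogatz(n, k, beta, rng):
    """Small-world network (Watts-Strogatz style): a ring lattice with k
    nearest neighbours, each edge rewired to a random node with prob. beta."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for i in range(n):
        for j in list(adj[i]):
            if j > i and rng.random() < beta:  # rewire each edge once
                adj[i].discard(j)
                adj[j].discard(i)
                new = rng.randrange(n)
                while new == i or new in adj[i]:
                    new = rng.randrange(n)
                adj[i].add(new)
                adj[new].add(i)
    return adj


def simulate(n=200, k=4, beta=0.1, n_bad=15, n_good=0, steps=150, seed=0):
    """One toy run: Bad Bots push conspiratorial messages (-1), Good Bots
    push accurate messages (+1), humans copy neighbours' non-neutral
    beliefs. Returns the final fraction of conspiratorial humans.
    All rates below are illustrative assumptions."""
    rng = random.Random(seed)
    adj = watts_strogatz(n, k, beta, rng)
    ids = list(range(n))
    rng.shuffle(ids)
    bad = set(ids[:n_bad])
    good = set(ids[n_bad:n_bad + n_good])
    humans = [i for i in ids if i not in bad and i not in good]
    belief = {h: 0 for h in humans}  # -1 conspiracy, 0 neutral, +1 accurate
    for _ in range(steps):
        # Bots push their message to one random human neighbour per step.
        for b in sorted(bad | good):
            targets = [j for j in adj[b] if j in belief]
            if targets and rng.random() < 0.5:
                belief[rng.choice(targets)] = -1 if b in bad else +1
        # Humans propagate: copy a random neighbour's non-neutral belief.
        for h in humans:
            nbrs = [j for j in adj[h] if j in belief and belief[j] != 0]
            if nbrs and rng.random() < 0.3:
                belief[h] = belief[rng.choice(nbrs)]
    return sum(1 for v in belief.values() if v == -1) / len(humans)
```

Comparing runs with and without Good Bots (e.g. `simulate(n_good=0)` versus `simulate(n_good=15)`) reproduces the qualitative pattern the paper reports: with no counter-messaging, the conspiratorial fraction climbs unchecked, while proactive good messaging keeps it substantially lower. An Info-Correction Bot variant would instead react by flipping neighbours whose belief is already -1.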
Problem

Research questions and friction points this paper is trying to address.

conspiracy theories
social media bots
misinformation
agent-based modeling
cyber ecosystem
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent-Based Modeling
Social Bots
Conspiracy Theory Mitigation
Information Correction
Proactive Messaging