MOSAIC: Modeling Social AI for Content Dissemination and Regulation in Multi-Agent Simulations

📅 2025-04-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how users on online social platforms assess content authenticity and how moderation strategies affect misinformation diffusion and user engagement. We propose the first multi-agent simulation framework integrating LLM-driven interpretable social agents into large-scale dynamic directed social graphs, incorporating fine-grained personality representations and reasoning-based behavioral modeling to simulate realistic interactions—including likes, shares, and reports. Our approach reveals a path-dependent mechanism underlying deceptive content propagation. We demonstrate that moderate intervention significantly suppresses misinformation spread (average reduction of 37.2%) while concurrently increasing user interaction rates (+12.8%). Empirical validation confirms the efficacy of three distinct moderation strategies. Moreover, we identify systematic, modelable discrepancies between agents’ self-reported reasoning and their actual behavioral outputs—enabling improved alignment between stated intent and observed action in agent-based social simulations.

Technology Category

Application Category

📝 Abstract
We present a novel, open-source social network simulation framework, MOSAIC, where generative language agents predict user behaviors such as liking, sharing, and flagging content. This simulation combines LLM agents with a directed social graph to analyze emergent deception behaviors and gain a better understanding of how users determine the veracity of online social content. By constructing user representations from diverse fine-grained personas, our system enables multi-agent simulations that model content dissemination and engagement dynamics at scale. Within this framework, we evaluate three different content moderation strategies with simulated misinformation dissemination, and we find that they not only mitigate the spread of non-factual content but also increase user engagement. In addition, we analyze the trajectories of popular content in our simulations, and explore whether simulation agents' articulated reasoning for their social interactions truly aligns with their collective engagement patterns. We open-source our simulation software to encourage further research within AI and social sciences.
Problem

Research questions and friction points this paper is trying to address.

Modeling user behaviors in social networks using generative AI agents
Analyzing deception and content veracity in online social interactions
Evaluating content moderation strategies for misinformation and engagement
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agents simulate user behaviors
Directed social graph analyzes deception
Fine-grained personas enable multi-agent simulations
🔎 Similar Papers
No similar papers found.
Genglin Liu
Genglin Liu
University of California, Los Angeles
Natural Language Processing
Salman Rahman
Salman Rahman
University of California Los Angeles
Machine LearningNatural Language ProcessingLanguage Modeling
E
Elisa Kreiss
University of California, Los Angeles
M
Marzyeh Ghassemi
MIT CSAIL
S
Saadia Gabriel
University of California, Los Angeles