Simulating hashtag dynamics with networked groups of generative agents

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how collective communication around narrative media shapes individual belief formation and drives societal consensus or polarization. Method: We propose a paradigm of LLM-based generative agent networks to simulate the dynamic evolution of hashtags on social media, implementing a multi-agent simulation framework that combines structured prompt engineering (embedding social context and domain-specific prior knowledge) with network science metrics, such as response entropy, to quantify information diffusion. Contribution/Results: We find that social rewards and background knowledge critically modulate hashtag generation; that agent networks reproduce human response consistency under simplified conditions; and that politically sensitive content requires fine-grained prompt design to ensure fidelity. This work represents the first systematic application of LLM-driven generative agents to narrative diffusion modeling, offering a computationally tractable and interpretable framework for analyzing belief evolution in digital environments.
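The summary's "structured prompt engineering" can be pictured as a prompt builder that keeps social context, background knowledge, and the reward framing in separate labeled sections. The function below is a hypothetical sketch of that idea, not the authors' actual prompts; every field name is an assumption for illustration.

```python
def build_agent_prompt(post, neighbor_hashtags, background, reward_note):
    """Hypothetical structured prompt in the spirit of the paper's framework:
    domain background knowledge, neighbors' hashtags (social context), and a
    social-reward framing are embedded as clearly separated sections."""
    return (
        f"Background knowledge:\n{background}\n\n"
        f"Hashtags recently used by your neighbors: {', '.join(neighbor_hashtags)}\n\n"
        f"Social reward: {reward_note}\n\n"
        f"Post: {post}\n"
        "Reply with a single hashtag."
    )

prompt = build_agent_prompt(
    "Heatwave hits Europe",
    ["#heatwave", "#climate"],
    "Summers in Europe have been warming for decades.",
    "Matching a neighbor's hashtag earns a point.",
)
```

Separating the sections makes it easy to ablate one factor at a time (e.g., drop the background-knowledge block) when testing how each modulates hashtag generation.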

📝 Abstract
Networked environments shape how information embedded in narratives influences individual and group beliefs and behavior. This raises key questions about how group communication around narrative media impacts belief formation and how such mechanisms contribute to the emergence of consensus or polarization. Language data from generative agents offer insight into how naturalistic forms of narrative interactions (such as hashtag generation) evolve in response to social rewards within networked communication settings. To investigate this, we developed an agent-based modeling and simulation framework composed of networks of interacting Large Language Model (LLM) agents. We benchmarked the simulations of four state-of-the-art LLMs against human group behaviors observed in a prior network experiment (Study 1) and against naturally occurring hashtags from Twitter (Study 2). Quantitative metrics of network coherence (e.g., entropy of a group's responses) reveal that while LLMs can approximate human-like coherence in sanitized domains (Study 1's experimental data), effective integration of background knowledge and social context in more complex or politically sensitive narratives likely requires careful and structured prompting.
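The abstract's coherence metric, the entropy of a group's responses, is standard Shannon entropy over the distribution of hashtags the agents produce. A minimal sketch (our reading of the metric, not the authors' code):

```python
from collections import Counter
import math

def response_entropy(responses):
    """Shannon entropy (in bits) of a group's hashtag responses.
    Lower entropy means higher group coherence (consensus);
    higher entropy means more fragmented responses."""
    counts = Counter(responses)
    total = len(responses)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

# Perfect consensus: all 8 agents emit the same hashtag.
print(response_entropy(["#climate"] * 8))               # 0.0
# Maximal disagreement: 8 unique hashtags gives log2(8) = 3 bits.
print(response_entropy([f"#tag{i}" for i in range(8)]))  # 3.0
```

Tracking this quantity over simulation rounds shows whether a networked group is converging toward consensus or fragmenting.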
Problem

Research questions and friction points this paper is trying to address.

Modeling how networked communication shapes belief formation
Investigating narrative-driven consensus or polarization emergence
Benchmarking LLM agents against human social behaviors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent-based modeling with networked LLM agents
Benchmarking simulations against human group behaviors
Structured prompting for social context integration
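The agent-based loop described above can be sketched as a synchronous update over a network: each round, every agent observes its neighbors' current hashtags and emits a new one. The sketch below is an assumption about the loop's shape, with a simple imitation policy standing in for the actual LLM call; `majority_copy` is a hypothetical stand-in, not the paper's method.

```python
import random

def simulate_hashtag_network(adjacency, init_tags, rounds, choose, seed=0):
    """Minimal networked-agent loop: adjacency[i] lists agent i's neighbors.
    `choose` stands in for an LLM call; it maps (own tag, observed tags, rng)
    to the agent's next hashtag. All agents update synchronously each round."""
    rng = random.Random(seed)
    tags = list(init_tags)
    for _ in range(rounds):
        tags = [
            choose(tags[i], [tags[j] for j in neighbors], rng)
            for i, neighbors in enumerate(adjacency)
        ]
    return tags

def majority_copy(own, observed, rng):
    """Stand-in policy: copy the most common neighbor hashtag
    (imitation under a social reward), breaking ties at random."""
    if not observed:
        return own
    counts = {}
    for t in observed:
        counts[t] = counts.get(t, 0) + 1
    best = max(counts.values())
    return rng.choice([t for t, c in counts.items() if c == best])

# Ring of 4 agents; each sees its two ring neighbors.
ring = [[3, 1], [0, 2], [1, 3], [2, 0]]
final = simulate_hashtag_network(ring, ["#a", "#a", "#b", "#a"], 5, majority_copy)
```

Swapping `majority_copy` for a real LLM call (fed a structured prompt) recovers the paper's setup, and feeding each round's `tags` to a response-entropy metric quantifies convergence.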