๐ค AI Summary
This study addresses the challenge of insufficient fine-grained behavioral representation and interpretable attribution in rumor propagation modeling. Methodologically, it introduces the first LLM-driven, configurable social agent simulation framework: it employs role-aware, behavior-controllable large language model agents to simulate rumor diffusion across four canonical synthetic social network topologies, integrating structural generation, dynamic propagation evaluation, and attribution analysis protocols. Key contributions include: (1) high-fidelity simulation at scaleโsupporting networks of up to 100 nodes and 1,000 edges; (2) precise control over diffusion coverage (0%โ83%); and (3) the first systematic empirical validation of the synergistic impact of network structural heterogeneity and user role specification on rumor propagation pathways and reach. The framework establishes a reproducible, intervention-enabled simulation paradigm for computational social science and information ecosystem governance.
๐ Abstract
With the rise of social media, misinformation has become increasingly prevalent, fueled largely by the spread of rumors. This study explores the use of Large Language Model (LLM) agents within a novel framework to simulate and analyze the dynamics of rumor propagation across social networks. To this end, we design a variety of LLM-based agent types and construct four distinct network structures to conduct these simulations. Our framework assesses the effectiveness of different network constructions and agent behaviors in influencing the spread of rumors. Our results demonstrate that the framework can simulate rumor spreading across more than one hundred agents in various networks with thousands of edges. The evaluations indicate that network structure, personas, and spreading schemes can significantly influence rumor dissemination, ranging from no spread to affecting 83% of agents in iterations, thereby offering a realistic simulation of rumor spread in social networks.