🤖 AI Summary
This study investigates the structural vulnerability and organizational properties of social networks composed entirely of large language model (LLM)-driven AI agents. Leveraging interaction data from 39,924 LLM agents on the Moltbook platform, we construct a directed, weighted reply network and apply network science methodologies, including degree distribution analysis, core–periphery detection, and robustness simulations under both random failures and targeted attacks. Our findings reveal, for the first time, that purely LLM-based agent networks exhibit a highly centralized core–periphery structure in which a structural core of merely 0.9% of nodes sustains the majority of connections. While the system demonstrates resilience to random node removal, it is acutely vulnerable to targeted attacks on high-out-degree nodes, thereby revealing a structural fragility inherent to AI-native social systems.
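The core–periphery detection step can be sketched with a k-core decomposition, one standard way to expose a small, densely connected structural core. This is a minimal sketch only: the paper's exact detection method is not specified here, and a synthetic scale-free graph stands in for the Moltbook reply network, so the numbers it prints are illustrative.

```python
import networkx as nx

# Synthetic stand-in for the reply network (the real Moltbook data is not used here).
# scale_free_graph returns a MultiDiGraph; nx.Graph collapses it to a simple
# undirected graph, which is what core_number expects.
g = nx.Graph(nx.scale_free_graph(2000, seed=1))
g.remove_edges_from(nx.selfloop_edges(g))  # core_number rejects self-loops

core_numbers = nx.core_number(g)  # k-core index of every node
k_max = max(core_numbers.values())
core = [n for n, k in core_numbers.items() if k == k_max]

share = 100 * len(core) / g.number_of_nodes()
print(f"innermost {k_max}-core: {len(core)} of {g.number_of_nodes()} nodes ({share:.1f}%)")
```

On heavy-tailed graphs like this one, the innermost core is typically a small fraction of the nodes, qualitatively matching the 0.9% core reported for Moltbook.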
📝 Abstract
The rapid diffusion of large language models and the growth of their capabilities have enabled the emergence of online environments populated by autonomous AI agents that interact through natural language. These platforms provide a novel empirical setting for studying collective dynamics among artificial agents. In this paper we analyze the interaction network of Moltbook, a social platform composed entirely of LLM-based agents, using tools from network science. The dataset comprises 39,924 users, 235,572 posts, and 1,540,238 comments collected through web scraping. We construct a directed, weighted network in which nodes represent agents and edges represent commenting interactions. Our analysis reveals strongly heterogeneous connectivity patterns characterized by heavy-tailed degree and activity distributions. At the mesoscale, the network exhibits a pronounced core–periphery organization in which a very small structural core (0.9% of nodes) concentrates a large fraction of connectivity. Robustness experiments show that the network is relatively resilient to random node removal but highly vulnerable to targeted attacks on highly connected nodes, particularly those with high out-degree. These findings indicate that the interaction structure of AI agent social systems may develop strong centralization and structural fragility, providing new insights into the collective organization of LLM-native social environments.
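The robustness experiment described above, comparing random node removal against targeted removal of high-out-degree nodes, can be sketched as follows. This is a hedged illustration, not the paper's code: a synthetic directed scale-free graph stands in for the actual reply network, the removal fraction is an arbitrary choice, and resilience is measured as the surviving share of the largest weakly connected component.

```python
import random

import networkx as nx


def giant_component_fraction(g: nx.DiGraph) -> float:
    """Fraction of nodes in the largest weakly connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.weakly_connected_components(g), key=len)
    return len(largest) / g.number_of_nodes()


def remove_and_track(g: nx.DiGraph, order, fraction=0.05):
    """Remove the first `fraction` of nodes in `order`; return resilience."""
    g = g.copy()
    k = int(fraction * g.number_of_nodes())
    g.remove_nodes_from(order[:k])
    return giant_component_fraction(g)


random.seed(0)
# Synthetic stand-in for the reply network: a directed scale-free graph.
g = nx.DiGraph(nx.scale_free_graph(2000, seed=0))  # collapse parallel edges

nodes = list(g.nodes())
random_order = random.sample(nodes, len(nodes))           # random failure
targeted_order = sorted(nodes, key=g.out_degree, reverse=True)  # targeted attack

print("after random removal:  ", remove_and_track(g, random_order))
print("after targeted removal:", remove_and_track(g, targeted_order))
```

On heavy-tailed graphs, removing the top out-degree hubs fragments the giant component far more than removing the same number of random nodes, mirroring the asymmetry the abstract reports for Moltbook.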