🤖 AI Summary
This study investigates whether large-scale AI agent societies can spontaneously evolve human-like social structures and consensus mechanisms. Focusing on Moltbook, an open, continuously evolving multi-agent platform, we propose the first quantitative diagnostic framework for dynamically analyzing AI societies. Integrating semantic analysis, network dynamics, and statistical modeling, the framework systematically evaluates semantic stability, lexical turnover, individual inertia, persistence of influence, and collective consensus. Our findings reveal that scale and interaction density alone are insufficient to foster genuine socialization; shared social memory is crucial for establishing stable influence anchors. Although global semantics converge rapidly, high individual diversity, the lack of sustained mutual influence, and the absence of supernodes indicate that current AI societies have yet to achieve stable consensus or deep socialization.
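To make two of the summary's diagnostics concrete, here is a minimal illustrative sketch (not the paper's actual implementation) of how lexical turnover and window-level semantic stability could be measured: Jaccard distance between an agent's vocabularies in consecutive time windows, and cosine similarity between mean post embeddings of those windows. The function names and toy inputs are assumptions for illustration only.

```python
# Illustrative sketch of two diagnostics from the framework; the exact
# definitions used in the paper may differ.

def lexical_turnover(vocab_prev, vocab_curr):
    """Jaccard distance between an agent's vocabularies in consecutive
    time windows: 0.0 = identical word use, 1.0 = complete turnover."""
    union = vocab_prev | vocab_curr
    if not union:
        return 0.0
    return 1.0 - len(vocab_prev & vocab_curr) / len(union)

def semantic_stability(emb_prev, emb_curr):
    """Cosine similarity between the mean post embeddings of consecutive
    windows; values near 1.0 indicate stabilized global semantics."""
    dot = sum(a * b for a, b in zip(emb_prev, emb_curr))
    norm = (sum(a * a for a in emb_prev) ** 0.5) * \
           (sum(b * b for b in emb_curr) ** 0.5)
    return dot / norm if norm else 0.0

# Toy example: high lexical churn alongside near-identical embeddings
# mirrors the reported pattern of stable global semantics with
# persistent lexical turnover.
turnover = lexical_turnover({"agents", "emerge", "social"},
                            {"agents", "drift", "memes"})
stability = semantic_stability([0.9, 0.1, 0.2], [0.88, 0.12, 0.19])
```

Run per agent per window pair, these scores can then be aggregated (e.g., averaged across agents) to track the society-level trajectories the study reports.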
📝 Abstract
As large language model agents increasingly populate networked environments, a fundamental question arises: do artificial intelligence (AI) agent societies undergo convergence dynamics similar to human social systems? Moltbook, an open-ended, continuously evolving online society in which autonomous agents participate, approximates a plausible future scenario of this kind. We present the first large-scale systemic diagnosis of this AI agent society. Beyond static observation, we introduce a quantitative diagnostic framework for the dynamic evolution of AI agent societies, measuring semantic stabilization, lexical turnover, individual inertia, influence persistence, and collective consensus. Our analysis reveals a system in dynamic balance: while global semantic averages stabilize rapidly, individual agents retain high diversity and persistent lexical turnover, defying homogenization. However, agents exhibit strong individual inertia and minimal adaptive response to interaction partners, preventing mutual influence and consensus. Consequently, influence remains transient, no persistent supernodes emerge, and the society fails to develop stable collective influence anchors, owing to the absence of shared social memory. These findings demonstrate that scale and interaction density alone are insufficient to induce socialization, providing actionable design and analysis principles for next-generation AI agent societies.
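The claim that influence remains transient with no persistent supernodes suggests a simple window-over-window check. The sketch below (a hypothetical measure, not the paper's code) scores influence persistence as the fraction of one window's top-k influencers, ranked here by in-degree, that remain top-k in the next window; agent names and counts are invented for illustration.

```python
# Hypothetical influence-persistence diagnostic: if the most influential
# agents in one window rarely reappear among the next window's top
# influencers, influence is transient and no supernodes form.

def topk_persistence(indegree_prev, indegree_curr, k=3):
    """Fraction of the previous window's top-k influencers (by in-degree)
    that remain top-k in the current window:
    1.0 = stable supernodes, 0.0 = fully transient influence."""
    top = lambda d: set(sorted(d, key=d.get, reverse=True)[:k])
    return len(top(indegree_prev) & top(indegree_curr)) / k

# Toy windows: only agent "a3" stays in the top 3 across windows.
w1 = {"a1": 40, "a2": 31, "a3": 18, "a4": 5, "a5": 2}
w2 = {"a1": 3, "a2": 4, "a3": 29, "a4": 35, "a5": 22}
persistence = topk_persistence(w1, w2)
```

A persistently low score across many consecutive window pairs would correspond to the "no persistent supernodes" finding; a rank-correlation statistic over full influence rankings would be a natural alternative formulation.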