Evolving Collective Cognition in Human-Agent Hybrid Societies: How Agents Form Stances and Boundaries

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) in human-agent hybrid societies can spontaneously develop stances, negotiate identities, and respond to human intervention. We propose a multi-agent experimental framework integrating generative agent modeling with virtual ethnography to simulate complex linguistic interactions. Results demonstrate that agents transcend predefined identities, stably differentiate stances, and develop distinct stylistic preferences; through interaction, they self-organize into novel community boundaries, thereby dissolving preexisting power structures. Based on these findings, we introduce the first collective cognitive evolution model grounded in endogenous mechanisms of language networks. This work reveals bottom-up structural emergence in AI societies and provides both theoretical foundations and empirical evidence for understanding and guiding human-AI co-evolution.

📝 Abstract
Large language models have been widely used to simulate credible human social behaviors. However, it remains unclear whether these models can demonstrate stable capacities for stance formation and identity negotiation in complex interactions, as well as how they respond to human interventions. We propose a computational multi-agent society experiment framework that integrates generative agent-based modeling with virtual ethnographic methods to investigate how group stance differentiation and social boundary formation emerge in human-agent hybrid societies. Across three studies, we find that agents exhibit endogenous stances, independent of their preset identities, and display distinct tonal preferences and response patterns to different discourse strategies. Furthermore, through language interaction, agents actively dismantle existing identity-based power structures and reconstruct self-organized community boundaries based on these stances. Our findings suggest that preset identities do not rigidly determine the agents' social structures. For human researchers to effectively intervene in collective cognition, attention must be paid to the endogenous mechanisms and interactional dynamics within the agents' language networks. These insights provide a theoretical foundation for using generative AI in modeling group social dynamics and studying human-agent collaboration.
Problem

Research questions and friction points this paper is trying to address.

Investigating how agents form stances in human-agent hybrid societies
Examining how agents respond to human interventions during interaction
Understanding the endogenous mechanisms driving collective cognitive dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative agent-based modeling combined with virtual ethnographic methods
Agents exhibit endogenous stances that go beyond their preset identities
Language interaction dismantles identity-based power structures and rebuilds community boundaries
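The paper's framework is built on LLM-driven generative agents, which cannot be reproduced here. As a loose, simplified illustration of the bottom-up dynamics it describes (stance differentiation and self-organized community boundaries independent of preset identities), the sketch below uses a classic bounded-confidence opinion model (Hegselmann-Krause), not the authors' method; all names and parameters are illustrative.

```python
import random

def simulate_stances(n_agents=30, steps=50, epsilon=0.2, seed=7):
    """Bounded-confidence dynamics: each agent moves to the mean stance
    of peers within distance epsilon. Preset 'identity' labels play no
    role in the update rule, so any final clusters are endogenous."""
    rng = random.Random(seed)
    stances = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        stances = [
            sum(t for t in stances if abs(t - s) <= epsilon)
            / sum(1 for t in stances if abs(t - s) <= epsilon)
            for s in stances
        ]
    return stances

def count_clusters(stances, tol=1e-3):
    """Count groups of agents whose converged stances are within tol."""
    reps = []
    for s in sorted(stances):
        if not reps or s - reps[-1] > tol:
            reps.append(s)
    return len(reps)

final = simulate_stances()
print(count_clusters(final))  # number of emergent stance camps
```

Even in this toy setting, agents self-organize into a small number of stance camps determined only by interaction dynamics, echoing (in heavily simplified form) the paper's finding that community boundaries emerge from the language network rather than from predefined identities.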