Neural Synchrony Between Socially Interacting Language Models

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit human-like "social cognition" during interaction, asking whether neural synchrony, the coupling of brain activity observed between humans during shared understanding, has an analogue in their internal representations. To this end, the authors construct a multi-agent social simulation environment and, for the first time, bring the concept of neural synchrony into LLM research. By temporally aligning hidden-layer representations, measuring representational similarity, and evaluating performance on social tasks, they propose representational synchrony as a novel metric of model sociality. Experiments show that LLMs exhibit significant representational synchrony during interaction, and that this synchrony correlates strongly with performance on social tasks, offering a new perspective for understanding and evaluating the social capabilities of artificial intelligence.
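The summary describes measuring representational similarity between temporally aligned hidden-layer representations of interacting agents. A minimal sketch of one plausible such metric is below: mean per-timestep cosine similarity between two hidden-state trajectories. The function name `representational_synchrony` and the choice of cosine similarity are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def representational_synchrony(h_a: np.ndarray, h_b: np.ndarray) -> float:
    """Mean per-timestep cosine similarity between two temporally
    aligned hidden-state trajectories of shape (T, d).

    Illustrative proxy only; the paper's precise metric is not
    specified in the abstract.
    """
    assert h_a.shape == h_b.shape, "trajectories must be time-aligned"
    # Normalize each timestep's hidden vector to unit length.
    a = h_a / np.linalg.norm(h_a, axis=1, keepdims=True)
    b = h_b / np.linalg.norm(h_b, axis=1, keepdims=True)
    # Cosine similarity at each timestep, averaged over the interaction.
    return float(np.mean(np.sum(a * b, axis=1)))

# Toy usage: identical trajectories are perfectly synchronized.
rng = np.random.default_rng(0)
h = rng.normal(size=(10, 8))
print(round(representational_synchrony(h, h), 6))  # → 1.0
```

In practice the trajectories would be hidden states extracted from each LLM at aligned conversational turns; how turns are aligned and which layer is read out are design choices the abstract leaves open.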

📝 Abstract
Neuroscience has uncovered a fundamental mechanism of our social nature: human brain activity becomes synchronized with others in many social contexts involving interaction. Traditionally, social minds have been regarded as an exclusive property of living beings. Although large language models (LLMs) are widely accepted as powerful approximations of human behavior, with multi-LLM systems being extensively explored to enhance their capabilities, it remains controversial whether they can be meaningfully compared to human social minds. In this work, we explore neural synchrony between socially interacting LLMs as empirical evidence for this debate. Specifically, we introduce neural synchrony during social simulations as a novel proxy for analyzing the sociality of LLMs at the representational level. Through carefully designed experiments, we demonstrate that it reliably reflects both social engagement and temporal alignment in their interactions. Our findings indicate that neural synchrony between LLMs is strongly correlated with their social performance, highlighting an important link between neural synchrony and the social behaviors of LLMs. Our work offers a new perspective to examine the "social minds" of LLMs, highlighting surprising parallels in the internal dynamics that underlie human and LLM social interaction.
Problem

Research questions and friction points this paper is trying to address.

neural synchrony
large language models
social interaction
social minds
representational alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

neural synchrony
large language models
social interaction
representational alignment
multi-agent systems