SocialEval: Evaluating Social Intelligence of Large Language Models

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of systematic evaluation of social intelligence (SI) in large language models (LLMs). We introduce SocialEval, the first script-driven, bilingual SI benchmark. Methodologically, we design human-authored, narrative “World Tree” scripts and propose a dual-dimensional evaluation paradigm—outcome-oriented (goal achievement) and process-oriented (interpersonal skill deployment)—complemented by behavioral trajectory analysis, neural activation mapping, and representational geometric analysis for fine-grained capability attribution and neuro-mechanistic investigation. Key contributions include: (1) the first empirical demonstration that LLMs significantly underperform humans in SI, exhibiting excessive prosociality that frequently compromises goal attainment; and (2) discovery of functionally specialized, capability-specific representational subspaces in LLMs—paralleling the modular organization of human social cognitive networks.

📝 Abstract
LLMs exhibit promising Social Intelligence (SI) in modeling human behavior, raising the need to evaluate LLMs' SI and its discrepancy from that of humans. SI equips humans with interpersonal abilities to behave wisely in navigating social interactions toward social goals. This suggests an operational evaluation paradigm: outcome-oriented evaluation of goal achievement and process-oriented evaluation of interpersonal abilities, which existing work fails to address. To this end, we propose SocialEval, a script-based bilingual SI benchmark that integrates outcome- and process-oriented evaluation through manually crafted narrative scripts. Each script is structured as a world tree containing plot lines driven by interpersonal abilities, providing a comprehensive view of how LLMs navigate social interactions. Experiments show that LLMs fall behind humans on both SI evaluations and exhibit prosociality, preferring more positive social behaviors even when they lead to goal failure. Analysis of LLMs' formed representation space and neuronal activations reveals that LLMs have developed ability-specific functional partitions akin to the human brain.
Problem

Research questions and friction points this paper is trying to address.

Evaluating Social Intelligence (SI) in Large Language Models (LLMs)
Assessing discrepancy between LLMs and humans in social interactions
Developing outcome- and process-oriented SI evaluation benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Script-based bilingual SI benchmark
Outcome- and process-oriented evaluation integration
Ability-specific functional partitions analysis
Jinfeng Zhou
Tsinghua University
LLMs · Social Intelligence

Yuxuan Chen
The CoAI Group, DCST, Tsinghua University

Yihan Shi
Harvard University

Xuanming Zhang
University of Wisconsin–Madison

Leqi Lei
Tsinghua University
Artificial Intelligence · Large Language Models · Consciousness

Yi Feng
Beijing Jiaotong University

Zexuan Xiong
The CoAI Group, DCST, Tsinghua University

Miao Yan
Peking University

Xunzhi Wang
Nankai University

Yaru Cao
Northwest Minzu University

Jianing Yin
University of Pennsylvania & Tsinghua University
Human-Computer Interaction · Mixed Reality

Shuai Wang
Huawei Noah's Ark Lab

Quanyu Dai
Huawei Noah's Ark Lab

Zhenhua Dong
Noah's Ark Lab, Huawei Technologies Co., Ltd.
Recommender Systems · Causal Inference · Counterfactual Learning · Trustworthy AI · Machine Learning

Hongning Wang
Associate Professor, Department of Computer Science and Technology, Tsinghua University
Machine Learning · Information Retrieval · Large Language Models

Minlie Huang
The CoAI Group, DCST, Tsinghua University