Entering Real Social World! Benchmarking the Social Intelligence of Large Language Models from a First-person Perspective

📅 2024-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the absence of systematic social intelligence evaluation for large language models (LLMs) in first-person human–AI social interactions. We propose EgoSocialArena, the first comprehensive benchmark framework tailored to this perspective, grounded in three pillars: cognitive intelligence, situational intelligence, and behavioral intelligence. It employs human-centered, multi-dimensional task modeling, authentic interaction scenario design, and human-performance-aligned evaluation to enable fine-grained, quantitative assessment of LLMs’ social capabilities. Experiments span eight state-of-the-art foundation models; results show that even the strongest model, o1-preview, scores 11.0 points lower than human baselines—highlighting critical deficits in behavioral understanding and situational adaptation. This work fills a key gap in behavioral intelligence evaluation and establishes a reproducible, extensible benchmark for advancing socially intelligent AI.

📝 Abstract
Social intelligence is built upon three foundational pillars: cognitive intelligence, situational intelligence, and behavioral intelligence. As large language models (LLMs) become increasingly integrated into our social lives, understanding, evaluating, and developing their social intelligence is becoming increasingly important. While multiple existing works have investigated the social intelligence of LLMs, (1) most focus on a specific aspect, and the social intelligence of LLMs has yet to be systematically organized and studied; (2) most position LLMs as passive observers from a third-person perspective, as in Theory of Mind (ToM) tests, whereas egocentric first-person evaluation aligns better with actual LLM-based agent use scenarios; and (3) behavioral intelligence lacks comprehensive evaluation, particularly in critical human–machine interaction scenarios. In light of this, we present EgoSocialArena, a novel framework grounded in the three pillars of social intelligence (cognitive, situational, and behavioral intelligence), aimed at systematically evaluating the social intelligence of LLMs from a first-person perspective. With EgoSocialArena, we conduct a comprehensive evaluation of eight prominent foundation models; even the most advanced LLMs, such as o1-preview, lag behind human performance by 11.0 points.
Problem

Research questions and friction points this paper is trying to address.

Social Intelligence
Large Language Models
Human-Computer Interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

EgoSocialArena
Social Intelligence Assessment
First-person Interaction Evaluation
Guiyang Hou
College of Computer Science and Technology, Zhejiang University
Wenqi Zhang
Zhejiang University
Language Model · Multimodal Learning · Embodied Agents
Yongliang Shen
College of Computer Science and Technology, Zhejiang University
Zeqi Tan
College of Computer Science and Technology, Zhejiang University
Sihao Shen
Alibaba Group
Weiming Lu
Zhejiang University
Natural Language Processing · Large Language Models · AGI