EgoSocial: Benchmarking Proactive Intervention Ability of Omnimodal LLMs via Egocentric Social Interaction Perception

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
In first-person AR/VR social scenarios, existing large language models lack social context awareness, leading to poorly timed AI-assistant interventions and thus disruptive user experiences. Method: The paper introduces EgoSocial, the first large-scale egocentric dataset of 13,500 first-person video–question pairs for benchmarking intervention in social interaction perception, and proposes EgoSoD (EgoSocial Detection), an end-to-end method for intervention timing prediction. Informed by an analysis of current omnimodal LLMs, EgoSoD integrates multimodal contextual cues (audio and visual) into a social thinking graph that dynamically models participants and their interactions to capture evolving social dynamics. Contribution/Results: EgoSoD yields substantial gains in social situational awareness: it improves Intervention Timing accuracy by 45.6% for Phi-4 and 9.9% for Gemini 2.5 Pro, and overall Social Interaction performance by 20.4% and 6.9%, respectively, establishing a foundation for natural, context-aware human–AI collaboration in embodied intelligence systems.

📝 Abstract
As AR/VR technologies become integral to daily life, there is a growing need for AI that understands human social dynamics from an egocentric perspective. However, current LLMs often lack the social awareness to discern when to intervene as an AI assistant. This leads to constant, socially unaware responses that may disrupt natural conversation and negatively impact user focus. To address these limitations, we introduce EgoSocial, a large-scale egocentric dataset with 13,500 social video-question pairs, specifically designed to benchmark intervention in social interaction perception. We also present an in-depth analysis of current omnimodal LLMs (OLLMs) to assess their effectiveness in detecting diverse social contextual cues. Experiments show that OLLMs still struggle to detect intervention timing (14.4% for Gemini 2.5 Pro). We also propose EgoSoD (EgoSocial Detection), an end-to-end method for robustly discerning social dynamics. Informed by our OLLM analysis, EgoSoD integrates multimodal contextual cues (e.g., audio and visual cues) into a social thinking graph, dynamically modeling participants and interactions. Our method proactively detects intervention timing and social interactions, precisely determining when to intervene. EgoSoD improves Phi-4 by 45.6% and Gemini 2.5 Pro by 9.9% on Intervention Timing, and improves Phi-4 by 20.4% and Gemini 2.5 Pro by 6.9% on overall Social Interaction performance. We will release the dataset and code soon.
Problem

Research questions and friction points this paper is trying to address.

Benchmarking AI intervention timing in social interactions
Detecting multimodal cues for proactive AI assistance
Improving AI social awareness through egocentric perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces EgoSocial dataset for benchmarking intervention timing
Proposes EgoSoD method integrating multimodal contextual cues
Dynamically models social interactions via social thinking graph
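The paper does not publish implementation details, but the idea of a social thinking graph that fuses audio-visual cues to decide when an assistant may intervene can be illustrated with a toy sketch. All names here (`Participant`, `SocialGraph`, `should_intervene`) and the decision rule are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch of a "social thinking graph" for intervention timing.
# The class names, cue fields, and decision rule are assumptions for
# illustration only; they are not taken from the EgoSoD paper.
from dataclasses import dataclass, field


@dataclass
class Participant:
    name: str
    speaking: bool = False        # audio cue: currently talking
    gaze_at_wearer: bool = False  # visual cue: attending to the camera wearer


@dataclass
class SocialGraph:
    participants: list = field(default_factory=list)
    interactions: set = field(default_factory=set)  # pairs in conversation

    def add_interaction(self, a: str, b: str) -> None:
        # Edges are unordered participant pairs engaged in an interaction.
        self.interactions.add(frozenset((a, b)))

    def should_intervene(self) -> bool:
        # Toy rule: intervene only during a conversational pause (no one is
        # speaking) while at least one participant attends to the wearer.
        anyone_speaking = any(p.speaking for p in self.participants)
        attended = any(p.gaze_at_wearer for p in self.participants)
        return (not anyone_speaking) and attended


graph = SocialGraph()
graph.participants = [
    Participant("Alice", speaking=False, gaze_at_wearer=True),
    Participant("Bob", speaking=False),
]
graph.add_interaction("Alice", "Bob")
print(graph.should_intervene())  # True: speech pause, and Alice attends
```

A real system would update such a graph per frame from audio-visual detectors and feed it to the OLLM as structured context; the point of the sketch is only that the intervention decision is read off the modeled participants and interactions rather than from raw frames.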