🤖 AI Summary
This work identifies a systematic deficiency in vision-language models (VLMs): their limited understanding of nonverbal social cues, such as emotion and the social dynamics behind it, and introduces the new task of *Visual Social-Pragmatic Inference*. To address the absence of dedicated evaluation resources, the authors construct the first high-quality, fine-grained benchmark dataset for this task, combining multimodal prompt engineering with controllable image–text alignment techniques. Systematic evaluation of state-of-the-art VLMs on this benchmark reveals a substantial performance gap relative to humans on social-pragmatic inference, which the authors define and quantify as the *Visual Social-Pragmatic Inference gap*. The study fills a critical void in evaluating VLMs' social cognition and provides a reproducible assessment framework, along with concrete directions for improving model understanding of real-world social interactions.
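To make the idea of a reproducible assessment framework concrete, here is a minimal, hypothetical sketch of how one might score a VLM against human annotations on such a benchmark and compute the resulting gap. The dataset schema, multiple-choice format, model interface, and human baseline below are illustrative assumptions, not the paper's released code or methodology.

```python
# Hypothetical sketch (not the paper's code): scoring a VLM on a
# visual social-pragmatic inference benchmark and measuring the gap to humans.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SocialInferenceItem:
    image_path: str        # image depicting a social situation
    question: str          # e.g. "How does the person on the left feel?"
    choices: List[str]     # candidate social-pragmatic inferences
    answer_idx: int        # human-annotated gold choice

def accuracy(model: Callable[[str, str, List[str]], int],
             items: List[SocialInferenceItem]) -> float:
    """Fraction of items where the model picks the gold choice."""
    correct = sum(model(it.image_path, it.question, it.choices) == it.answer_idx
                  for it in items)
    return correct / len(items)

def pragmatic_inference_gap(model_acc: float, human_acc: float) -> float:
    """One simple way to quantify the gap: human minus model accuracy."""
    return human_acc - model_acc

if __name__ == "__main__":
    # Toy example with a trivial "always pick the first choice" baseline
    # standing in for a real VLM.
    items = [SocialInferenceItem("img_001.jpg",
                                 "Why is the woman avoiding eye contact?",
                                 ["embarrassment", "boredom", "anger"], 0)]
    baseline = lambda img, q, choices: 0
    model_acc = accuracy(baseline, items)
    print(f"gap vs. a 0.95 human baseline: "
          f"{pragmatic_inference_gap(model_acc, 0.95):.2f}")
```

In practice, the placeholder `baseline` callable would be replaced by a wrapper around an actual VLM that maps an image, question, and candidate answers to a predicted choice.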
📝 Abstract
Understanding human social behavior, such as recognizing emotions and the social dynamics that cause them, is an important and challenging problem. While LLMs have made remarkable advances, they are limited to the textual domain and cannot account for the major role that non-verbal cues play in understanding social situations. Vision Language Models (VLMs) could potentially address this gap; however, their ability to make correct inferences over such social cues has received little attention. In this paper, we explore the capabilities of VLMs at social reasoning. We identify a previously overlooked limitation in VLMs: the Visual Social-Pragmatic Inference gap. To target this gap, we propose a new task for VLMs: Visual Social-Pragmatic Inference. We construct a high-quality dataset to test the abilities of VLMs on this task and benchmark the performance of several VLMs on it.