VIBE: Can a VLM Read the Room?

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a systematic deficiency in vision-language models (VLMs): the understanding of nonverbal social cues such as emotion and social dynamics. It introduces the novel task of *visual social-pragmatic reasoning* and, to address the absence of dedicated evaluation resources, constructs the first high-quality, fine-grained benchmark dataset for it, integrating multimodal prompt engineering with controllable image–text alignment techniques. Systematic evaluation of state-of-the-art VLMs on this benchmark reveals a substantial performance gap relative to human capabilities in social-pragmatic inference, which the authors formally define and quantitatively measure as the *visual social-pragmatic reasoning gap*. The study fills a critical void in evaluating VLMs' social cognition and provides both a reproducible assessment framework and concrete directions for improving model understanding of real-world social interactions.

📝 Abstract
Understanding human social behavior, such as recognizing emotions and the social dynamics causing them, is an important and challenging problem. While LLMs have made remarkable advances, they are limited to the textual domain and cannot account for the major role that non-verbal cues play in understanding social situations. Vision Language Models (VLMs) can potentially close this gap; however, their ability to make correct inferences over such social cues has received little attention. In this paper, we explore the capabilities of VLMs at social reasoning. We identify a previously overlooked limitation in VLMs: the Visual Social-Pragmatic Inference gap. To target this gap, we propose a new task for VLMs: Visual Social-Pragmatic Inference. We construct a high-quality dataset to test the abilities of a VLM for this task and benchmark the performance of several VLMs on it.
Problem

Research questions and friction points this paper is trying to address.

Understanding human social behavior through non-verbal cues
Assessing VLMs' ability for visual social-pragmatic inference
Addressing the gap in VLMs' social reasoning capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Use VLMs to account for non-verbal social cues
Introduce Visual Social-Pragmatic Inference task
Benchmark VLMs on new social dataset
Tania Chakraborty
Purdue University, West Lafayette, IN, USA
Eylon Caplan
Purdue University, West Lafayette, IN, USA
Dan Goldwasser
Purdue University
natural language processing · machine learning