Can Vision Language Models Understand Mimed Actions?

📅 2025-06-17
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates vision-language models' (VLMs') ability to understand mimed actions, a low-ambiguity subset of nonverbal communication (NVC). To this end, we introduce MIME, the first dedicated benchmark for evaluating VLMs on mimed actions: it comprises 86 mimed action classes rendered from motion-capture data, with systematic variations of the character, background, and camera viewpoint to assess recognition robustness. Evaluation follows a video question-answering (VQA) paradigm covering both open-weight and API-based VLMs. Results reveal a substantial performance gap between current VLMs and human annotators on MIME, exposing fundamental limitations in modeling gesture semantics and motion intent. The work positions mimed actions as a tractable entry point for NVC understanding and provides a scalable, motion-capture-based pipeline for generating evaluation data, establishing a new benchmark for gesture comprehension in VLMs.
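
The card does not give the benchmark's actual interface, so the following is only a minimal sketch of what a multiple-choice VQA evaluation loop over MIME-style items could look like. The item fields, file paths, and the `model_answer` stub are illustrative assumptions, not the paper's or dataset's real API; a real run would replace the stub with a call to an open-weight or API-based VLM.

```python
# Minimal, hypothetical sketch of a multiple-choice VQA evaluation loop
# over MIME-style items. Field names, paths, and the model stub are
# assumptions for illustration only.
from dataclasses import dataclass
import random


@dataclass
class MimeItem:
    video_path: str      # rendered mimed-action clip
    action_label: str    # ground-truth action, e.g. "climbing a ladder"
    choices: list[str]   # candidate answers shown to the model
    perturbation: str    # "character", "background", or "viewpoint"


def model_answer(item: MimeItem) -> str:
    """Placeholder for a real VLM call; here it just guesses at random."""
    return random.choice(item.choices)


def evaluate(items: list[MimeItem]) -> dict[str, float]:
    """Accuracy overall and broken down by perturbation type."""
    totals: dict[str, int] = {}
    correct: dict[str, int] = {}
    for item in items:
        totals[item.perturbation] = totals.get(item.perturbation, 0) + 1
        if model_answer(item) == item.action_label:
            correct[item.perturbation] = correct.get(item.perturbation, 0) + 1
    scores = {k: correct.get(k, 0) / totals[k] for k in totals}
    scores["overall"] = sum(correct.values()) / sum(totals.values())
    return scores


if __name__ == "__main__":
    demo = [
        MimeItem("clips/ladder_view2.mp4", "climbing a ladder",
                 ["climbing a ladder", "rowing a boat", "typing"], "viewpoint"),
        MimeItem("clips/rope_char1.mp4", "pulling a rope",
                 ["pulling a rope", "opening a door", "juggling"], "character"),
    ]
    print(evaluate(demo))
```

Reporting accuracy per perturbation type, as above, is one straightforward way to surface the robustness gaps the summary describes.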

📝 Abstract
Nonverbal communication (NVC) plays an integral role in human language, but studying NVC in general is challenging because of its broad scope and high variance in interpretation among individuals and cultures. However, mime -- the theatrical technique of suggesting intent using only gesture, expression, and movement -- is a subset of NVC that consists of explicit and embodied actions with much lower human interpretation variance. We argue that a solid understanding of mimed actions is a crucial prerequisite for vision-language models capable of interpreting and commanding more subtle aspects of NVC. Hence, we propose Mime Identification Multimodal Evaluation (MIME), a novel video-based question answering benchmark comprising 86 mimed actions. Constructed with motion capture data, MIME consists of variations of each action with perturbations applied to the character, background, and viewpoint for evaluating recognition robustness. We find that both open-weight and API-based vision-language models perform significantly worse than humans on MIME, motivating the need for further research into instilling a more robust understanding of human gestures.
Problem

Research questions and friction points this paper is trying to address.

Evaluating vision-language models' understanding of mimed actions
Assessing recognition robustness across varied characters, backgrounds, and viewpoints
Addressing the performance gap between models and humans in gesture comprehension
Innovation

Methods, ideas, or system contributions that make the work stand out.

MIME benchmark for mimed action evaluation
Motion-capture data for generating action variations
Robustness testing with character, background, and viewpoint perturbations (see the sketch after this list)
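
The paper reports perturbing the character, background, and viewpoint of each motion-captured action, but does not spell out the construction pipeline here. The snippet below is a hypothetical sketch of how the perturbed variants of a single action might be enumerated; the asset names, file formats, and job layout are assumptions, not the authors' actual tooling.

```python
# Hypothetical sketch: enumerating perturbed variants of one mimed action
# from a single motion-capture clip. Asset names and paths are illustrative.
from itertools import product

CHARACTERS = ["default_actor", "robot", "casual_female"]
BACKGROUNDS = ["plain_grey", "street", "living_room"]
VIEWPOINTS = ["front", "left_45", "top_down"]


def enumerate_variants(action: str, mocap_file: str) -> list[dict]:
    """One rendering job per (character, background, viewpoint) combination."""
    jobs = []
    for character, background, viewpoint in product(CHARACTERS, BACKGROUNDS, VIEWPOINTS):
        jobs.append({
            "action": action,
            "mocap": mocap_file,
            "character": character,
            "background": background,
            "viewpoint": viewpoint,
            "output": f"clips/{action}_{character}_{background}_{viewpoint}.mp4",
        })
    return jobs


if __name__ == "__main__":
    jobs = enumerate_variants("climbing_a_ladder", "mocap/climbing_a_ladder.bvh")
    print(len(jobs), "variants, e.g.:", jobs[0])
```

Driving rendering from a cross-product like this is what makes the benchmark scalable: each new motion-capture clip yields many controlled variants for robustness testing.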