Mind the Motions: Benchmarking Theory-of-Mind in Everyday Body Language

📅 2025-11-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing Theory of Mind (ToM) benchmarks focus narrowly on false-belief and asymmetric-information reasoning, neglecting diverse mental states—such as emotions and intentions—and the critical role of nonverbal cues (NVCs). Method: We introduce Motion2Mind, the first systematic framework for ToM-oriented body-language understanding. It features an expert-validated knowledge base and a fine-grained video dataset annotated with 222 distinct bodily actions and 397 mental states, alongside a novel, interpretable evaluation benchmark mapping NVCs to underlying psychological states. Contribution/Results: Experiments reveal substantial performance gaps between current AI models and humans in both NVC recognition and the interpretation of intentions and emotions; models also exhibit a pervasive over-interpretation bias. This work bridges a key gap in ToM assessment by incorporating the nonverbal dimension, establishing a new paradigm and robust evaluation infrastructure for mental-state modeling in embodied intelligence.

📝 Abstract
Our ability to interpret others' mental states through nonverbal cues (NVCs) is fundamental to our survival and social cohesion. While existing Theory of Mind (ToM) benchmarks have primarily focused on false-belief tasks and reasoning with asymmetric information, they overlook other mental states beyond belief and the rich tapestry of human nonverbal communication. We present Motion2Mind, a framework for evaluating the ToM capabilities of machines in interpreting NVCs. Leveraging an expert-curated body-language reference as a proxy knowledge base, we build Motion2Mind, a carefully curated video dataset with fine-grained nonverbal cue annotations paired with manually verified psychological interpretations. It encompasses 222 types of nonverbal cues and 397 mental states. Our evaluation reveals that current AI systems struggle significantly with NVC interpretation, exhibiting not only a substantial performance gap in Detection but also patterns of over-interpretation in Explanation compared to human annotators.
Problem

Research questions and friction points this paper is trying to address.

Evaluating machine Theory-of-Mind capabilities in interpreting nonverbal cues
Addressing limitations of existing benchmarks beyond false-belief tasks
Assessing AI performance gaps in detecting and explaining body language
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expert-curated body-language reference as knowledge base
Fine-grained video dataset with nonverbal cue annotations
Evaluates AI interpretation of 222 cues and 397 mental states
Seungbeen Lee
Yonsei University
Jinhong Jeong
Yonsei University
Donghyun Kim
Yonsei University
Yejin Son
Yonsei University
Machine Learning · Deep Learning
Youngjae Yu
Seoul National University