🤖 AI Summary
Existing Theory of Mind (ToM) benchmarks focus narrowly on false-belief and asymmetric-information reasoning, neglecting other mental states (such as emotions and intentions) and the critical role of nonverbal cues (NVCs). Method: We introduce Motion2Mind, the first systematic framework for ToM-oriented body-language understanding. It features an expert-validated knowledge base and a fine-grained video dataset annotated with 222 types of nonverbal cues and 397 mental states, alongside a novel, interpretable evaluation benchmark that maps NVCs to underlying psychological states. Contribution/Results: Experiments reveal substantial performance gaps between current AI models and humans in both NVC detection and the interpretation of intentions and emotions; models also exhibit a pervasive over-interpretation bias. This work closes a key gap in ToM assessment by incorporating the nonverbal dimension, providing a new paradigm and evaluation infrastructure for mental-state modeling in embodied intelligence.
📝 Abstract
Our ability to interpret others' mental states through nonverbal cues (NVCs) is fundamental to our survival and social cohesion. While existing Theory of Mind (ToM) benchmarks have primarily focused on false-belief tasks and reasoning with asymmetric information, they overlook mental states beyond belief and the rich tapestry of human nonverbal communication. We present Motion2Mind, a framework for evaluating the ToM capabilities of machines in interpreting NVCs. Leveraging an expert-curated body-language reference as a proxy knowledge base, we build the Motion2Mind dataset, a carefully curated video dataset with fine-grained nonverbal-cue annotations paired with manually verified psychological interpretations. It encompasses 222 types of nonverbal cues and 397 mental states. Our evaluation reveals that current AI systems struggle significantly with NVC interpretation, exhibiting not only a substantial performance gap in Detection but also patterns of over-interpretation in Explanation compared to human annotators.