Video-Only ToM: Enhancing Theory of Mind in Multimodal Large Language Models

πŸ“… 2026-03-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses a gap in theory of mind (ToM) evaluation for multimodal large language models: existing evaluations rely predominantly on textual inputs and overlook ToM capabilities in purely visual contexts, along with the associated attention behavior and hallucination issues. The authors propose VisionToM, a framework that, for the first time, focuses exclusively on video-only ToM tasks. By computing intervention vectors that align visual representations with the correct semantic targets, VisionToM steers attention across hierarchical visual features, reducing the model's reliance on linguistic priors. Combining hierarchical attention modulation, semantic alignment, and a joint evaluation protocol of multiple-choice questioning and open-ended generation, the method substantially improves ToM question-answering accuracy on the EgoToM benchmark. It also produces more faithful explanations of agents' mental states, revealing how hallucinations impair ToM reasoning and advancing alignment between multimodal models and human-like social cognition.

πŸ“ Abstract
As large language models (LLMs) continue to advance, there is increasing interest in their ability to infer human mental states and demonstrate a human-like Theory of Mind (ToM). Most existing ToM evaluations, however, are centered on text-based inputs, while scenarios relying solely on visual information receive far less attention. This leaves a gap, since real-world human-AI interaction typically requires multimodal understanding. In addition, many current methods regard the model as a black box and rarely probe how its internal attention behaves in multiple-choice question answering (QA). The impact of LLM hallucinations on such tasks is also underexplored from an interpretability perspective. To address these issues, we introduce VisionToM, a vision-oriented intervention framework designed to strengthen task-aware reasoning. The core idea is to compute intervention vectors that align visual representations with the correct semantic targets, thereby steering the model's attention through different layers of visual features. This guidance reduces the model's reliance on spurious linguistic priors, leading to more reliable multimodal language model (MLLM) outputs and better QA performance. Experiments on the EgoToM benchmark, an egocentric, real-world video dataset for ToM with three multiple-choice QA settings, demonstrate that our method substantially improves the ToM abilities of MLLMs. Furthermore, results on an additional open-ended generation task show that VisionToM enables MLLMs to produce free-form explanations that more accurately capture agents' mental states, pushing machine-human collaboration toward greater alignment.
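The abstract describes computing intervention vectors that pull visual hidden states toward a correct semantic target and applying them at selected layers. The paper's actual procedure is not given here; the sketch below is a generic activation-steering recipe under assumed tensor shapes, with hypothetical names (`steering_vector`, `intervene`, `alpha`) chosen for illustration, not the authors' implementation.

```python
import numpy as np

def steering_vector(visual_h, target_sem):
    """Direction from the mean visual hidden state toward the target
    semantic embedding; a common activation-steering construction.

    visual_h:   (num_tokens, dim) visual hidden states at one layer
    target_sem: (dim,) embedding of the correct semantic target
    """
    return target_sem - visual_h.mean(axis=0)

def intervene(hidden_states, target_sem, layers, alpha=0.1):
    """Add a scaled, unit-norm steering vector to the visual hidden
    states at the chosen layers, nudging them toward the target.

    hidden_states: dict mapping layer index -> (num_tokens, dim) array
    layers:        set of layer indices to intervene on
    alpha:         intervention strength (hypothetical hyperparameter)
    """
    out = {}
    for layer, h in hidden_states.items():
        if layer in layers:
            v = steering_vector(h, target_sem)
            v = v / (np.linalg.norm(v) + 1e-8)  # unit-norm direction
            out[layer] = h + alpha * v          # broadcast over tokens
        else:
            out[layer] = h
    return out
```

Applying the same direction at several layers mirrors the paper's idea of guiding attention "through different layers of visual features"; a small `alpha` keeps the intervention from overwriting the visual evidence.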
Problem

Research questions and friction points this paper is trying to address.

Theory of Mind
Multimodal Large Language Models
Video-Only Understanding
Model Interpretability
Hallucination
Innovation

Methods, ideas, or system contributions that make the work stand out.

VisionToM
Theory of Mind
Multimodal Large Language Models
Visual Intervention
Attention Steering
πŸ”Ž Similar Papers
No similar papers found.