🤖 AI Summary
Current large language models (LLMs) exhibit significantly lower performance than humans on Theory of Mind (ToM) tasks, primarily due to insufficient fine-grained modeling of characters’ long-term backgrounds and evolving mental states.
Method: The study is the first to systematically validate the critical role of long-range character backgrounds in ToM reasoning. It introduces CharToM-QA, a ToM benchmark of 1,035 QA pairs that grounds each question in the full biographical context of characters from classic novels. The human-annotated, background-dependent questions are validated through controlled experiments with human readers deeply familiar with the source texts and through cross-model evaluation of state-of-the-art LLMs, including o1.
Contribution/Results: Human accuracy improves markedly when readers know a character's background, whereas even LLMs that have seen the original novels during pre-training score substantially lower on ToM questions, revealing a fundamental limitation in their ability to integrate global narrative context for mental-state inference.
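To make the with/without-background comparison concrete, the sketch below shows one way such an evaluation loop could be scored on QA pairs of this kind. The file name, the field names (`question`, `options`, `answer`, `background`), and the `query_llm` stub are illustrative assumptions, not the benchmark's actual schema or the authors' code.

```python
# Minimal sketch: score a model on ToM QA pairs with and without character background.
# All field names and the data file are hypothetical; replace query_llm with a real model call.
import json
from typing import Callable

def query_llm(prompt: str) -> str:
    """Stub standing in for an actual LLM call; always answers 'A' here."""
    return "A"

def build_prompt(item: dict, with_background: bool) -> str:
    # Optionally prepend the character's long-range background before the question.
    parts = []
    if with_background and item.get("background"):
        parts.append(f"Character background:\n{item['background']}")
    parts.append(f"Question: {item['question']}")
    parts.append("Options:\n" + "\n".join(f"{k}. {v}" for k, v in item["options"].items()))
    parts.append("Answer with the letter of the best option.")
    return "\n\n".join(parts)

def evaluate(items: list[dict], ask: Callable[[str], str], with_background: bool) -> float:
    correct = 0
    for item in items:
        pred = ask(build_prompt(item, with_background)).strip().upper()[:1]
        correct += int(pred == item["answer"])
    return correct / len(items)

if __name__ == "__main__":
    with open("chartom_qa.json") as f:  # hypothetical file name
        data = json.load(f)
    for cond in (False, True):
        acc = evaluate(data, query_llm, with_background=cond)
        print(f"background={cond}: accuracy={acc:.3f}")
```

Running the same items under both conditions gives the accuracy gap that the paper attributes to long-range background knowledge, for models and (analogously) for human readers who have or have not read the novels.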
📝 Abstract
Theory-of-Mind (ToM) is a fundamental psychological capability that allows humans to understand and interpret the mental states of others. Humans infer others' thoughts by integrating causal cues and indirect clues from broad contextual information, often derived from past interactions. In other words, human ToM heavily relies on an understanding of the backgrounds and life stories of others. Unfortunately, this aspect is largely overlooked in existing benchmarks for evaluating machines' ToM capabilities, because they use short narratives without global backgrounds. In this paper, we verify the importance of understanding long personal backgrounds in ToM and assess the performance of LLMs in such realistic evaluation scenarios. To achieve this, we introduce a novel benchmark, CharToM-QA, comprising 1,035 ToM questions based on characters from classic novels. Our human study reveals a significant disparity in performance: the same group of educated participants performs dramatically better when they have read the novels than when they have not. In parallel, our experiments on state-of-the-art LLMs, including the very recent o1 model, show that LLMs still perform notably worse than humans, even though they have seen these stories during pre-training. This highlights the limitations of current LLMs in capturing the nuanced contextual information required for ToM reasoning.