🤖 AI Summary
This work addresses the limited scope of existing theory-of-mind (ToM) evaluations for large language models, which often rely on a single paradigm and fail to capture the full spectrum of human social cognition. To overcome this, the authors propose a comprehensive ToM benchmark grounded in human cognitive theories, encompassing 46 distinct task paradigms and over 8,000 bilingual (Chinese–English) samples, enabling the first multidimensional, structured assessment of ToM capabilities. Through large-scale human annotation and a systematic evaluation of 22 prominent language models, the study reveals significant deficiencies in current models' performance on complex ToM tasks, highlighting a structural gap between artificial systems and human-like social reasoning mechanisms.
📝 Abstract
Whether Large Language Models (LLMs) truly possess human-like Theory of Mind (ToM) capabilities has garnered increasing attention. However, existing benchmarks remain largely restricted to narrow paradigms such as false-belief tasks, failing to capture the full spectrum of human cognitive mechanisms. We introduce CogToM, a comprehensive, theoretically grounded benchmark comprising over 8,000 bilingual instances across 46 paradigms, validated by 49 human annotators. A systematic evaluation of 22 representative models, including frontier models such as GPT-5.1 and Qwen3-Max, reveals significant performance heterogeneity and highlights persistent bottlenecks in specific dimensions. Further analysis based on human cognitive patterns suggests potential divergences between LLM and human cognitive structures. CogToM offers a robust instrument and perspective for investigating the evolving cognitive boundaries of LLMs.