Understanding Artificial Theory of Mind: Perturbed Tasks and Reasoning in Large Language Models

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) possess robust theory of mind (ToM) capabilities, particularly under perturbed conditions. To this end, the authors construct a high-quality ToM dataset comprising perturbed false-belief tasks accompanied by manually curated, fine-grained reasoning paths, and introduce novel metrics to assess both reasoning correctness and answer faithfulness. Experimental results reveal a significant degradation in ToM performance across all evaluated models under perturbation. While chain-of-thought prompting generally enhances overall performance and yields faithful reasoning traces, it paradoxically reduces accuracy for certain perturbation types. This work provides new methodological tools and empirical evidence for evaluating and understanding the robustness of ToM in large language models.

📝 Abstract
Theory of Mind (ToM) refers to an agent's ability to model the internal states of others. Contributing to the debate over whether large language models (LLMs) exhibit genuine ToM capabilities, our study investigates their ToM robustness using perturbations of false-belief tasks and examines the potential of Chain-of-Thought (CoT) prompting to enhance performance and explain LLMs' decisions. We introduce a handcrafted, richly annotated ToM dataset comprising classic and perturbed false-belief tasks, the corresponding spaces of valid reasoning chains for correct task completion, annotations of reasoning faithfulness, and task solutions, and we propose metrics to evaluate reasoning-chain correctness and the extent to which final answers are faithful to the reasoning traces of the generated CoT. We show a steep drop in ToM capabilities under task perturbation for all evaluated LLMs, questioning the notion that any robust form of ToM is present. While CoT prompting improves ToM performance overall in a faithful manner, it surprisingly degrades accuracy for some perturbation classes, indicating that selective application is necessary.
Problem

Research questions and friction points this paper is trying to address.

Theory of Mind
Large Language Models
False-Belief Tasks
Perturbation
Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theory of Mind
Chain-of-Thought Prompting
False-Belief Tasks
Reasoning Faithfulness
Perturbation Analysis