LLMs and their Limited Theory of Mind: Evaluating Mental State Annotations in Situated Dialogue

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the limitations of large language models (LLMs) in inferring shared mental models (SMMs) from team dialogues. Because existing methods fail to capture individual-level variability in SMMs, we propose a two-stage SMM consistency evaluation framework. First, LLMs simulate human annotators, producing fine-grained mental-state annotations of dialogue utterances. Second, automated discrepancy detection compares the LLM-generated annotations against human annotations and a gold-standard reference to quantify systematic biases, particularly in spatial reasoning and the resolution of prosodic ambiguity. Our work establishes the first reproducible, task-driven assessment of LLMs' theory-of-mind capabilities, introduces the first dialogue dataset with parallel human and LLM SMM annotations, and empirically uncovers structural blind spots in LLMs' mental-state inference under complex social contexts.

📝 Abstract
What if large language models could not only infer human mindsets but also expose every blind spot in team dialogue, such as discrepancies in the team members' joint understanding? We present a novel two-step framework that leverages large language models (LLMs) both as human-style annotators of team dialogues, tracking the team's shared mental models (SMMs), and as automated detectors of discrepancies among individuals' mental states. In the first step, an LLM generates annotations by identifying SMM elements within task-oriented dialogues from the Cooperative Remote Search Task (CReST) corpus. Then, a secondary LLM compares these LLM-derived annotations and human annotations against gold-standard labels to detect and characterize divergences. We define an SMM coherence evaluation framework for this use case and apply it to six CReST dialogues, ultimately producing: (1) a dataset of human and LLM annotations; (2) a reproducible evaluation framework for SMM coherence; and (3) an empirical assessment of LLM-based discrepancy detection. Our results reveal that, although LLMs exhibit apparent coherence on straightforward natural-language annotation tasks, they systematically err in scenarios requiring spatial reasoning or disambiguation of prosodic cues.
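The first step of the framework, prompting an LLM to annotate a single utterance with SMM labels, might look roughly like the sketch below. The label set, prompt wording, and helper names (`build_annotation_prompt`, `parse_label`) are illustrative assumptions, not the paper's actual annotation protocol:

```python
# Illustrative sketch of step 1: building an annotation prompt for one
# dialogue utterance and normalizing the model's reply to a known label.
# The SMM label inventory below is assumed, not taken from the paper.
SMM_LABELS = ("belief", "goal", "intention", "spatial-reference")


def build_annotation_prompt(speaker: str, utterance: str) -> str:
    """Construct a single-utterance annotation prompt for an LLM."""
    labels = ", ".join(SMM_LABELS)
    return (
        "You are annotating a task-oriented team dialogue.\n"
        f"Label the mental-state content of the utterance below "
        f"with exactly one of: {labels}.\n"
        f"{speaker}: {utterance}\n"
        "Answer with the label only."
    )


def parse_label(response: str) -> str:
    """Normalize a raw model response to a known label, or 'unknown'."""
    candidate = response.strip().lower()
    return candidate if candidate in SMM_LABELS else "unknown"
```

Separating prompt construction from response parsing keeps the pipeline reproducible: the same prompts can be replayed against different models, and malformed replies degrade to `"unknown"` rather than corrupting the annotation set.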
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to annotate mental states in team dialogues
Detecting discrepancies in team members' shared understanding using LLMs
Assessing LLM limitations in spatial reasoning and prosodic disambiguation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-step LLM framework for mental state annotation
Automated discrepancy detection in team dialogues
Spatial reasoning and prosodic disambiguation challenges identified
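The second step, automated discrepancy detection, can be sketched as comparing each annotator's label sequence against the gold standard and reporting where they diverge. The function name, label values, and agreement metric (simple per-utterance accuracy) are assumptions for illustration; the paper's SMM coherence framework may define these differently:

```python
from typing import Dict, List


def detect_discrepancies(
    annotations: Dict[str, List[str]],  # annotator name -> one label per utterance
    gold: List[str],                    # gold-standard label per utterance
) -> Dict[str, Dict[str, object]]:
    """For each annotator (human or LLM), compute agreement with the
    gold standard and record the utterance indices that diverge."""
    report = {}
    for name, labels in annotations.items():
        divergent = [i for i, (a, g) in enumerate(zip(labels, gold)) if a != g]
        report[name] = {
            "agreement": 1 - len(divergent) / len(gold),
            "divergent": divergent,
        }
    return report
```

Comparing the divergent indices across annotators is what would surface systematic error patterns, e.g. an LLM repeatedly mislabeling spatially grounded utterances that human annotators get right.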