Benchmarking Mental State Representations in Language Models

📅 2024-06-25
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
This work investigates how large language models (LMs) internally represent mental states, such as others' beliefs, to better understand the mechanisms behind Theory of Mind (ToM) reasoning. Whereas prior studies rely predominantly on behavioural evaluation without examining internal representations, the authors run systematic probing experiments across model sizes, fine-tuning regimes, and prompt designs, complemented by training-free activation steering. The key findings are threefold: (1) the quality of models' representations of others' beliefs improves with model size and, more markedly, with fine-tuning; (2) probing performance is sensitive to prompt variations, even ones that should be beneficial, and control tasks reveal memorisation issues within the probes; (3) belief-reasoning performance can be improved by steering model activations, without training any probe or updating any parameters. Together, these results provide a more interpretable foundation for studying ToM capabilities in LMs.
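The training-free steering idea mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the activations are random stand-ins, and the steering vector is built as a simple difference of condition means (in practice the vectors would come from a transformer's residual stream, captured with forward hooks).

```python
import numpy as np

# Toy stand-ins for hidden activations at one layer (batch, hidden_dim).
# In a real setup these would be residual-stream activations from an LM
# reading contrastive belief scenarios; here we sample them synthetically.
rng = np.random.default_rng(0)
hidden_dim = 8

acts_true = rng.normal(loc=1.0, size=(16, hidden_dim))    # "true belief" contexts
acts_false = rng.normal(loc=-1.0, size=(16, hidden_dim))  # "false belief" contexts

# Steering vector: difference of condition means. No probe is trained.
steer = acts_true.mean(axis=0) - acts_false.mean(axis=0)
steer /= np.linalg.norm(steer)

def apply_steering(h, alpha=4.0):
    """Shift a hidden state along the steering direction by strength alpha."""
    return h + alpha * steer

h = acts_false[0]
h_steered = apply_steering(h)

# Alignment with the steering direction increases after the edit.
score = lambda v: float(v @ steer)
print(score(h) < score(h_steered))  # → True
```

At inference time the shifted state would be written back into the forward pass (e.g. via a hook), nudging the model's belief reasoning without any parameter updates.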

📝 Abstract
While numerous works have assessed the generative performance of language models (LMs) on tasks requiring Theory of Mind reasoning, research into the models' internal representation of mental states remains limited. Recent work has used probing to demonstrate that LMs can represent beliefs of themselves and others. However, these claims are accompanied by limited evaluation, making it difficult to assess how mental state representations are affected by model design and training choices. We report an extensive benchmark with various LM types with different model sizes, fine-tuning approaches, and prompt designs to study the robustness of mental state representations and memorisation issues within the probes. Our results show that the quality of models' internal representations of the beliefs of others increases with model size and, more crucially, with fine-tuning. We are the first to study how prompt variations impact probing performance on theory of mind tasks. We demonstrate that models' representations are sensitive to prompt variations, even when such variations should be beneficial. Finally, we complement previous activation editing experiments on Theory of Mind tasks and show that it is possible to improve models' reasoning performance by steering their activations without the need to train any probe.
Problem

Research questions and friction points this paper is trying to address.

How LMs internally represent mental states of self and others
Impact of model size and fine-tuning on belief representations
Strengthening belief representations via targeted activation edits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probing belief representations across model scales
Using control tasks to detect memorisation within the probes
Editing activations to correct wrong inferences