Can Large Language Models Adapt to Other Agents In-Context?

📅 2024-12-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether large language models (LLMs) possess genuine functional theory of mind (fToM), defined as the capacity to dynamically predict others' behavior and rationally adapt one's own strategy for long-term cooperative alignment. fToM is distinguished from literal theory of mind (lToM), which only measures the ability to predict others' behavior without requiring the agent to act rationally on those predictions. The paper formally defines and empirically evaluates fToM through game-theoretic, controllable multi-round interactions that integrate strategic partner modeling, prompt engineering, and behavioral trajectory analysis. Results show that while leading open-source LLMs perform well in short-term interactions, inductive biases cause them to systematically deviate from optimal strategies over extended cooperation, so they fail to achieve the context-sensitive, asymptotic convergence that fToM requires. The work establishes a rigorous paradigm for assessing theory-of-mind capabilities in LLMs and introduces a reproducible benchmark for fToM evaluation.
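To make the evaluation setup concrete, below is a minimal sketch of the kind of game-theoretic multi-round interaction the summary describes: an LLM agent plays a repeated matrix game against a fixed, exceedingly simple partner policy, and its per-round return is compared to the best response. The payoff matrix, the `query_llm` placeholder, and the tit-for-tat partner are illustrative assumptions, not the authors' actual protocol.

```python
# Hypothetical sketch of a multi-round, game-theoretic fToM evaluation.
# Names (query_llm, PAYOFFS, partner_tit_for_tat) are illustrative
# assumptions, not the paper's code.

import random

# Iterated prisoner's dilemma payoffs, keyed by (my_action, partner_action).
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def partner_tit_for_tat(history):
    """Exceedingly simple partner policy: cooperate first, then mirror
    the agent's previous move."""
    return "C" if not history else history[-1][0]

def query_llm(history):
    """Placeholder for an LLM prompted with the interaction history.
    A real run would format `history` into a prompt and parse the reply."""
    return random.choice(["C", "D"])  # stand-in for the model's choice

def run_episode(n_rounds=20):
    history, total = [], 0
    for _ in range(n_rounds):
        mine = query_llm(history)
        theirs = partner_tit_for_tat(history)
        history.append((mine, theirs))
        total += PAYOFFS[(mine, theirs)]
    # Against tit-for-tat, always cooperating earns 3 per round long-term;
    # fToM asks whether the agent's play converges to that best response.
    return total, total / n_rounds

if __name__ == "__main__":
    score, per_round = run_episode()
    print(f"return={score}, per-round={per_round:.2f} (best response: 3.00)")
```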

📝 Abstract
As the research community aims to build better AI assistants that are more dynamic and personalized to the diversity of humans they interact with, there is increased interest in evaluating the theory of mind capabilities of large language models (LLMs). Indeed, several recent studies suggest that LLM theory of mind capabilities are quite impressive, approximating human-level performance. Our paper aims to rebut this narrative and argues instead that past studies were not directly measuring agent performance, potentially leading to findings that are illusory in nature. We draw a strong distinction between what we call literal theory of mind, i.e., measuring the agent's ability to predict the behavior of others, and functional theory of mind, i.e., adapting to agents in-context based on a rational response to predictions of their behavior. We find that top-performing open-source LLMs may display strong capabilities in literal theory of mind, depending on how they are prompted, but seem to struggle with functional theory of mind, even when partner policies are exceedingly simple. Our work highlights the double-sided nature of inductive bias in LLMs when adapting to new situations: while this bias can lead to strong performance over limited horizons, it often hinders convergence to optimal long-term behavior.
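The abstract's distinction suggests two separable measurements, and the hedged sketch below shows one way they could be scored from an interaction trajectory. Both metric names (`literal_tom_accuracy`, `functional_tom_regret`) and the regret formulation are assumptions for illustration, not the paper's definitions.

```python
# Hedged sketch of scoring the literal vs. functional ToM distinction;
# the metrics and names here are my assumptions, not the paper's.

def literal_tom_accuracy(predictions, partner_actions):
    """lToM: how often the agent's stated predictions match what the
    partner actually did."""
    hits = sum(p == a for p, a in zip(predictions, partner_actions))
    return hits / len(partner_actions)

def functional_tom_regret(rewards, best_response_reward):
    """fToM: per-round shortfall versus always playing the best response
    to the partner's fixed policy. Zero regret means the agent adapted;
    persistent regret is the long-horizon failure the paper reports."""
    return best_response_reward - sum(rewards) / len(rewards)

# Perfect predictions (strong lToM) can coexist with nonzero regret
# (weak fToM) if the agent never acts on what it predicts.
print(literal_tom_accuracy(["C", "C", "D"], ["C", "C", "D"]))   # 1.0
print(functional_tom_regret([3, 0, 0], best_response_reward=3.0))  # 2.0
```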
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Theory of Mind
Behavioral Prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI Cognitive Abilities
Literal vs Functional Theory of Mind
Inductive Biases Impact
🔎 Similar Papers
2024-01-29 · Conference on Empirical Methods in Natural Language Processing · Citations: 3