Adaptive Theory of Mind for LLM-based Multi-Agent Coordination

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of multi-agent coordination failures caused by mismatched levels of Theory of Mind (ToM) reasoning. The authors propose an Adaptive Theory of Mind (A-ToM) mechanism that enables large language model–based agents to dynamically infer their partners' ToM levels from interaction history and adjust their own reasoning depth accordingly. This adaptive alignment allows agents to predict others' behavior more accurately and coordinate efficiently, avoiding both over- and under-reasoning. A-ToM represents the first approach to enable dynamic alignment of ToM levels during interaction. Empirical evaluations across diverse tasks, including repeated matrix games, grid-world navigation, and the Overcooked environment, demonstrate that A-ToM significantly enhances collaborative performance, underscoring its effectiveness and generalizability.

📝 Abstract
Theory of Mind (ToM) refers to the ability to reason about others' mental states, and higher-order ToM involves considering that others also possess their own ToM. Equipping large language model (LLM)-driven agents with ToM has long been considered a way to improve their coordination in multi-agent collaborative tasks. However, we find that misaligned ToM orders, i.e., mismatches in the depth of ToM reasoning between agents, can lead to insufficient or excessive reasoning about others, thereby impairing their coordination. To address this issue, we design an adaptive ToM (A-ToM) agent, which can align its ToM order with its partner's. Based on prior interactions, the agent estimates the partner's likely ToM order and leverages this estimate to predict the partner's actions, thereby facilitating behavioral coordination. We conduct empirical evaluations on four multi-agent coordination tasks: a repeated matrix game, two grid navigation tasks, and an Overcooked task. The results validate our findings on ToM alignment and demonstrate the effectiveness of our A-ToM agent. Furthermore, we discuss the generalizability of A-ToM to non-LLM-based agents, as well as what would diminish the importance of ToM alignment.
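The core loop the abstract describes (score candidate ToM orders against the partner's observed actions, then reason at the best-matching order) can be sketched in a toy form. This is not the paper's implementation: `predict_next` is a hypothetical stand-in for the LLM's level-k reasoning over a two-action game, and the scoring rule (count correct retrodictions) is an assumed simplification of the paper's estimation step.

```python
def predict_next(history, order):
    """Toy level-k prediction of the partner's next move ("A" or "B").

    order 0 assumes the partner repeats their last action; order k
    counters the move a level-(k-1) reasoner would make. (In this
    binary toy, even orders coincide with order 0 by parity.)
    """
    if not history:
        return "A"
    if order == 0:
        return history[-1]
    lower = predict_next(history, order - 1)
    return "B" if lower == "A" else "A"

def estimate_tom_order(history, max_order=2):
    """Score each candidate ToM order by how many of the partner's
    observed actions it retrodicts, and return the best-fitting order
    (lowest order wins ties)."""
    scores = {}
    for k in range(max_order + 1):
        scores[k] = sum(
            predict_next(history[:t], k) == history[t]
            for t in range(1, len(history))
        )
    return max(scores, key=scores.get)
```

For example, a partner who always repeats their move is best explained at order 0, while an alternating partner is best explained at order 1; an adaptive agent would then set its own reasoning one level above the estimate rather than at a fixed depth.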
Problem

Research questions and friction points this paper is trying to address.

Theory of Mind
multi-agent coordination
ToM alignment
LLM-based agents
reasoning depth
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Theory of Mind
multi-agent coordination
ToM alignment
large language models
behavioral prediction