MetaMind: Modeling Human Social Thoughts with Metacognitive Multi-Agent Systems

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitation of large language models (LLMs) in accurately inferring implicit intentions, emotions, and beliefs during social reasoning, this paper proposes the first multi-agent framework grounded in psychological metacognitive theory. The framework decomposes the reasoning process into three specialized agents: Theory of Mind (ToM) agents for joint intention-emotion inference, domain-knowledge agents for injecting cultural and ethical constraints, and response-generation agents for consistency verification. Crucially, it systematically integrates metacognitive mechanisms—such as monitoring, regulation, and reflection—into the multi-agent architecture, thereby enhancing situational plausibility, social appropriateness, and individual adaptability. Experiments demonstrate state-of-the-art performance across three canonical ToM benchmarks; a 35.7% improvement in real-world social scenario accuracy; a 6.2% gain in ToM reasoning accuracy; and, for the first time, human-level performance by LLMs on critical ToM tasks.

Technology Category

Application Category

📝 Abstract
Human social interactions depend on the ability to infer others' unspoken intentions, emotions, and beliefs, a cognitive skill grounded in the psychological concept of Theory of Mind (ToM). While large language models (LLMs) excel in semantic understanding tasks, they struggle with the ambiguity and contextual nuance inherent in human communication. To bridge this gap, we introduce MetaMind, a multi-agent framework inspired by psychological theories of metacognition, designed to emulate human-like social reasoning. MetaMind decomposes social understanding into three collaborative stages: (1) a Theory-of-Mind Agent generates hypotheses about users' mental states (e.g., intent, emotion), (2) a Domain Agent refines these hypotheses using cultural norms and ethical constraints, and (3) a Response Agent generates contextually appropriate responses while validating alignment with inferred intent. Our framework achieves state-of-the-art performance across three challenging benchmarks, with a 35.7% improvement in real-world social scenarios and a 6.2% gain in ToM reasoning. Notably, it enables LLMs to match human-level performance on key ToM tasks for the first time. Ablation studies confirm the necessity of all components, which showcase the framework's ability to balance contextual plausibility, social appropriateness, and user adaptation. This work advances AI systems toward human-like social intelligence, with applications in empathetic dialogue and culturally sensitive interactions. Code is available at https://github.com/XMZhangAI/MetaMind.
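The three-stage pipeline described in the abstract can be sketched in plain Python. This is a hypothetical illustration of the control flow only: in the actual MetaMind framework each agent is backed by LLM calls, whereas here the agents, the `Hypothesis` fields, and the fallback question are stand-in assumptions chosen to make the ToM → Domain → Response handoff and the metacognitive consistency check runnable.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    intent: str
    emotion: str
    score: float

def tom_agent(utterance: str) -> list[Hypothesis]:
    # Stage 1: propose candidate mental states for the user.
    # A real system would sample these from an LLM; they are
    # canned here purely for illustration.
    return [
        Hypothesis("seek_reassurance", "anxious", 0.8),
        Hypothesis("share_news", "excited", 0.3),
    ]

def domain_agent(hyps: list[Hypothesis], norms: dict) -> list[Hypothesis]:
    # Stage 2: regulate the hypothesis set by dropping candidates
    # that violate cultural or ethical constraints.
    return [h for h in hyps if h.intent not in norms["disallowed_intents"]]

def response_agent(hyps: list[Hypothesis]) -> tuple[str, Hypothesis]:
    # Stage 3: respond to the highest-scoring surviving hypothesis.
    best = max(hyps, key=lambda h: h.score)
    return f"It sounds like you might be feeling {best.emotion}.", best

def metamind_pipeline(utterance: str, norms: dict) -> str:
    # Metacognitive monitoring: if regulation empties the hypothesis
    # set, reflect and fall back to a neutral clarifying question.
    hyps = domain_agent(tom_agent(utterance), norms)
    if not hyps:
        return "Could you tell me more about how you're feeling?"
    reply, best = response_agent(hyps)
    # Consistency verification: the reply must reflect the inferred state.
    assert best.emotion in reply
    return reply
```

The key design point the sketch captures is that the Domain Agent acts as a filter between hypothesis generation and response generation, so a culturally inappropriate inference never reaches the user.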
Problem

Research questions and friction points this paper is trying to address.

Modeling human social thoughts using metacognitive multi-agent systems
Improving LLMs' ability to infer intentions and emotions in communication
Achieving human-level performance in Theory of Mind reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework emulates human social reasoning
Three-stage collaboration enhances Theory of Mind
Balances plausibility, appropriateness, and user adaptation
Xuanming Zhang
University of Wisconsin-Madison
Yuxuan Chen
Tsinghua University
Min-Hsuan Yeh
University of Wisconsin-Madison
Natural Language Processing
Yixuan Li
University of Wisconsin-Madison