Infusing Theory of Mind into Socially Intelligent LLM Agents

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language model (LLM)-based social agents generally lack Theory of Mind (ToM) capabilities, limiting their social understanding and long-term interactive effectiveness. To address this, we propose ToMAgent (ToMA), the first framework to jointly train ToM modeling with dialogue lookahead, enabling explicit inference of user mental states and strategic multi-step response planning. Methodologically, ToMA integrates prompt-driven mental state generation, ToM-aware state representation, and goal-conditioned dialogue trajectory prediction to jointly optimize relationship maintenance and task completion. Evaluated on the Sotopia benchmark, ToMAgent significantly outperforms all baselines, achieving state-of-the-art performance across goal-directed reasoning, long-term adaptability, and social quality metrics. These results empirically validate the efficacy and scalability of ToM-driven, lookahead-based social decision-making.

📝 Abstract
Theory of Mind (ToM), an understanding of the mental states of others, is a key aspect of human social intelligence, yet chatbots and LLM-based social agents do not typically integrate it. In this work, we demonstrate that LLMs that explicitly use ToM get better at dialogue, achieving goals more effectively. After showing that simply prompting models to generate mental states between dialogue turns already provides a significant benefit, we further introduce ToMAgent (ToMA), a ToM-focused dialogue agent. ToMA is trained by pairing ToM with dialogue lookahead to produce mental states that are maximally useful for achieving dialogue goals. Experiments on the Sotopia interactive social evaluation benchmark demonstrate the effectiveness of our method over a range of baselines. Comprehensive analysis shows that ToMA exhibits more strategic, goal-oriented reasoning behaviors, which enable long-horizon adaptation while maintaining better relationships with its partners. Our results suggest a step forward in integrating ToM for building socially intelligent LLM agents.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM social intelligence through Theory of Mind
Improving dialogue goal achievement with mental state modeling
Developing strategic reasoning for long-horizon social adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explicitly using Theory of Mind in LLMs
Training agents with ToM and dialogue lookahead
Generating mental states between dialogue turns
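The last point, generating mental states between dialogue turns and conditioning the next reply on them, can be sketched as a simple prompting loop. This is an illustrative sketch only: the `llm` callable, prompt wording, and function names are hypothetical stand-ins, not the paper's actual ToMA implementation or API.

```python
# Hypothetical sketch of prompt-driven mental-state generation between
# dialogue turns. `llm` stands in for any chat model; all names and
# prompts here are illustrative assumptions, not the paper's code.

def infer_mental_state(llm, history):
    """Ask the model to infer the partner's beliefs, desires, and intentions."""
    prompt = (
        "Dialogue so far:\n"
        + "\n".join(f"{speaker}: {utterance}" for speaker, utterance in history)
        + "\nInfer the partner's current beliefs, desires, and intentions."
    )
    return llm(prompt)

def next_utterance(llm, history, mental_state, goal):
    """Condition the reply on the inferred mental state and the agent's goal."""
    prompt = (
        "Dialogue so far:\n"
        + "\n".join(f"{s}: {u}" for s, u in history)
        + f"\nPartner's inferred mental state: {mental_state}"
        + f"\nYour goal: {goal}\nReply:"
    )
    return llm(prompt)

# Toy stand-in model so the loop runs without a real LLM backend.
def toy_llm(prompt):
    if "Infer the partner's" in prompt:
        return "Partner wants a lower price but values a quick deal."
    return "I can offer a small discount if we close today."

history = [("Buyer", "That price seems high to me.")]
state = infer_mental_state(toy_llm, history)
reply = next_utterance(toy_llm, history, state, goal="sell at a fair price")
history.append(("Seller", reply))
```

Each turn thus interleaves an explicit ToM inference step with response generation; the paper's training additionally pairs this with dialogue lookahead so the inferred states are optimized for goal achievement, which a static prompt loop like this does not capture.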