🤖 AI Summary
This work addresses the limited theory of mind (ToM) capabilities of large language models (LLMs) in multiparty numerical dialogues. We introduce the first ToM benchmark specifically designed for dynamic information tracking, knowledge-state modeling, and participant-centric numerical reasoning. Methodologically, we propose a scalable, rule-guided symbolic-semantic co-generation framework that integrates numerical relation graphs with explicit participant-level knowledge-state annotations, enabling the construction of realistic business dialogue QA pairs featuring false beliefs, information asymmetry, and distractors. Experimental results reveal substantial ToM deficits in state-of-the-art LLMs: they underperform human baselines by 42% in ToM accuracy, particularly in false-belief reasoning, robustness to distractors, and judgment of information sufficiency. This work fills a critical gap in ToM evaluation for multiparty numerical dialogue and establishes a novel paradigm for assessing LLMs’ cognitive reasoning capabilities.
📝 Abstract
Understanding multiparty conversations demands robust Theory of Mind (ToM) capabilities, including the ability to track dynamic information, manage knowledge asymmetries, and distinguish relevant information across extended exchanges. To advance ToM evaluation in such settings, we present a carefully designed, scalable methodology for generating high-quality benchmark conversation-question pairs with these characteristics. Using this methodology, we create $\texttt{DIAMONDs}$, a new conversational QA dataset covering common business, financial, and other group interactions. In these goal-oriented conversations, participants often have to track certain numerical quantities of interest (say, $\textit{expected profit}$) that can be derived from other variable quantities (like $\textit{marketing expenses}$, $\textit{expected sales}$, $\textit{salary}$, etc.), whose values also change over the course of the conversation. $\texttt{DIAMONDs}$ questions pose simple numerical reasoning problems over such quantities of interest (e.g., $\textit{funds required for charity events}$, $\textit{expected company profit next quarter}$) in the context of the information exchanged in the conversation. This allows for precise evaluation of ToM capabilities for carefully tracking and reasoning over participants' knowledge states. Our evaluation of state-of-the-art language models reveals significant challenges in handling participant-centric reasoning, particularly in situations where participants hold false beliefs. Models also struggle with conversations containing distractors and show limited ability to identify scenarios with insufficient information. These findings highlight current models' ToM limitations in handling real-world multiparty conversations.
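To make the reasoning pattern concrete, here is a minimal sketch of the participant-centric tracking such questions require. All names, variables, and values are illustrative assumptions, not items from the dataset: each participant's knowledge state is the set of variable values they have heard, a quantity of interest (expected profit) is derived from those values, and a participant who misses a revision ends up with a false belief.

```python
# Hypothetical sketch of participant-centric numerical tracking in a
# multiparty conversation. Names and values are illustrative only.

def derived_profit(beliefs):
    """Expected profit derived from one participant's believed values."""
    return beliefs["expected_sales"] - beliefs["marketing_expenses"] - beliefs["salary"]

# Ground-truth updates over the conversation:
# (speaker, variable, value, set of participants who hear it)
updates = [
    ("Alice", "expected_sales", 120_000, {"Alice", "Bob", "Carol"}),
    ("Bob", "marketing_expenses", 30_000, {"Alice", "Bob", "Carol"}),
    ("Alice", "salary", 50_000, {"Alice", "Bob", "Carol"}),
    # Carol steps out and misses this revision, creating a false belief.
    ("Bob", "marketing_expenses", 45_000, {"Alice", "Bob"}),
]

# Each participant's knowledge state: the variable values they believe.
beliefs = {p: {} for p in ("Alice", "Bob", "Carol")}
for _, var, value, hearers in updates:
    for p in hearers:
        beliefs[p][var] = value

print(derived_profit(beliefs["Alice"]))  # 25000: up-to-date view
print(derived_profit(beliefs["Carol"]))  # 40000: Carol's stale (false) belief
```

A question like "What profit does Carol expect?" must be answered from Carol's knowledge state (40,000), not the ground truth (25,000); distractor and insufficient-information cases would add irrelevant updates or leave a required variable unheard.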