🤖 AI Summary
This paper addresses the challenge of formalizing and efficiently computing Theory of Mind (ToM) in multi-agent systems. It proposes a computationally tractable game-theoretic framework that embeds recursive belief reasoning within bounded-rational game models, leveraging hierarchical belief representations, statistical inference, and approximate equilibrium computation to model mental states (goals, intentions, and beliefs) expressively yet feasibly. The key contributions are threefold: (1) the first unified formalization integrating ToM's semantic hierarchy with strategic game spaces; (2) support for arbitrary-order recursive mental reasoning; and (3) scalable, verifiable, cognition-driven decision-making in complex social interactions. Empirical evaluation on canonical multi-agent tasks demonstrates that the framework achieves both high inferential accuracy and real-time responsiveness, establishing a novel paradigm for explainable AI and human–agent collaboration.
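The "statistical inference" ingredient of the summary can be made concrete with a small sketch: the snippet below maintains a Bayesian posterior over another agent's goal from its observed, boundedly rational actions. Everything here (the goal set, the softmax likelihood, and names such as `update_goal_posterior`) is an illustrative assumption, not the paper's actual procedure.

```python
# A hedged sketch of first-order mental-state inference: maintaining a
# Bayesian posterior over another agent's goal from its observed actions.
# The goal set, the softmax likelihood, and all names are illustrative
# assumptions; the paper's actual inference procedure may differ.
import numpy as np

def action_likelihood(goal_utilities, temperature=1.0):
    """P(action | goal): a boundedly rational agent acts softmax-optimally."""
    z = np.asarray(goal_utilities) / temperature
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def update_goal_posterior(prior, utilities_per_goal, observed_action):
    """One Bayesian update of P(goal | action) after observing one action.

    prior[g]: current belief that the agent pursues goal g.
    utilities_per_goal[g][a]: utility of action a under goal g.
    """
    posterior = np.array([
        prior[g] * action_likelihood(utilities_per_goal[g])[observed_action]
        for g in range(len(prior))
    ])
    return posterior / posterior.sum()

# Usage: two hypothesized goals over three actions; observing action 0
# shifts belief toward the goal that makes action 0 attractive.
utilities = np.array([[3.0, 1.0, 0.0],   # goal 0 favors action 0
                      [0.0, 1.0, 3.0]])  # goal 1 favors action 2
belief = np.array([0.5, 0.5])
belief = update_goal_posterior(belief, utilities, observed_action=0)
print(belief)  # belief in goal 0 increases
```

Repeating this update over a trajectory of actions yields the kind of online goal inference the summary alludes to; a noisier likelihood (higher temperature) makes the posterior more conservative.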
📝 Abstract
Originating in psychology, *Theory of Mind* (ToM) has attracted significant attention across multiple research communities, especially logic, economics, and robotics. Most psychological work does not aim to formalize its central concepts, namely *goals*, *intentions*, and *beliefs*, so as to automate ToM-based computation; that formalization has, by contrast, been studied extensively by logicians. In this paper, we offer a different perspective by proposing a computational framework viewed through the lens of game theory. On the one hand, the framework prescribes how to make boundedly rational decisions while maintaining a theory of mind about others (and, recursively, while each of the others holds a theory of mind about the rest); on the other hand, it employs statistical techniques and approximate solutions to keep the inherent inference and decision problems computationally tractable.
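To make the recursive part of the abstract concrete, here is a minimal sketch of arbitrary-order ToM via level-k reasoning, with a softmax (quantal-response) choice rule standing in for bounded rationality. The two-player normal-form setting and names such as `level_k_policy` are assumptions for illustration, not the paper's framework.

```python
# A minimal sketch of recursive Theory-of-Mind reasoning in a two-player
# normal-form game. Level-k recursion plus a softmax choice rule stand in
# for the paper's bounded-rational equilibrium computation; all names and
# the game setup are illustrative assumptions.
import numpy as np

def softmax(utilities, temperature=1.0):
    """Bounded-rational choice: noisy best response over action utilities."""
    z = np.asarray(utilities) / temperature
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def level_k_policy(payoff_self, payoff_other, k, temperature=1.0):
    """Return a mixed strategy for an agent reasoning k levels deep.

    payoff_self[i, j]: this agent's payoff for its action i vs. the
    other's action j; payoff_other is the analogous matrix for the other
    agent. A level-0 agent acts uniformly; a level-k agent soft-best-responds
    to its belief that the other agent reasons at level k-1.
    """
    n_self = payoff_self.shape[0]
    if k == 0:
        return np.full(n_self, 1.0 / n_self)
    # Recursive ToM step: simulate the other agent one level shallower.
    belief_about_other = level_k_policy(payoff_other, payoff_self,
                                        k - 1, temperature)
    expected_utility = payoff_self @ belief_about_other
    return softmax(expected_utility, temperature)

# Usage: a simple coordination game; deeper reasoning sharpens coordination.
A = np.array([[2.0, 0.0], [0.0, 1.0]])  # row player's payoffs
B = A.T                                  # column player's payoffs
for k in range(4):
    print(k, level_k_policy(A, B, k, temperature=0.5))
```

The temperature parameter trades off rationality against noise: as it shrinks, the soft best response approaches an exact one, recovering classical recursive best-reply reasoning, while higher temperatures model the bounded rationality the abstract emphasizes.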