Cognitive Insights and Stable Coalition Matching for Fostering Multi-Agent Cooperation

📅 2024-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper reveals a non-monotonic relationship between Theory of Mind (ToM) capability and cooperative performance in multi-agent systems: higher ToM levels do not necessarily improve collaboration outcomes. Method: We propose a stable coalition matching mechanism that leverages ToM heterogeneity by jointly modeling belief alignment and expertise identification. Specifically, we design a cognition-aware agent preference model, extend the Gale–Shapley algorithm to support multi-objective optimization—incorporating belief consistency, capability complementarity, and long-term stability—and introduce a computationally tractable belief alignment metric. Contribution/Results: Experiments across diverse cooperative tasks demonstrate significant improvements in team success rate (+23.6%) and strategic robustness. Our approach establishes the first paradigm for coalition formation that synergistically integrates cognitive modeling with game-theoretic stability guarantees.

Technology Category

Application Category

📝 Abstract
Cognitive abilities, such as Theory of Mind (ToM), play a vital role in facilitating cooperation in human social interactions. However, our study reveals that agents with higher ToM abilities may not necessarily exhibit better cooperative behavior compared to those with lower ToM abilities. To address this challenge, we propose a novel matching coalition mechanism that leverages the strengths of agents with different ToM levels by explicitly considering belief alignment and specialized abilities when forming coalitions. Our proposed matching algorithm seeks to find stable coalitions that maximize the potential for cooperative behavior and ensure long-term viability. By incorporating cognitive insights into the design of multi-agent systems, our work demonstrates the potential of leveraging ToM to create more sophisticated and human-like coordination strategies that foster cooperation and improve overall system performance.
Problem

Research questions and friction points this paper is trying to address.

Examines how Theory of Mind affects multi-agent cooperation
Proposes stable coalition matching for belief alignment
Enhances cooperation via cognitive-inspired multi-agent design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel matching coalition mechanism for agents
Belief alignment and specialized abilities considered
Stable coalitions maximize cooperative behavior
🔎 Similar Papers
No similar papers found.
J
Jiaqi Shao
The Hong Kong University of Science and Technology, Hong Kong SAR, China
Tianjun Yuan
Tianjun Yuan
Duke University
Machine Learning System
T
Tao Lin
Westlake University, Hangzhou, China
B
Bing Luo
Duke Kunshan University, Suzhou, China