Nucleolus Credit Assignment for Effective Coalitions in Multi-agent Reinforcement Learning

📅 2025-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
In cooperative multi-agent reinforcement learning (MARL), monolithic global coalition formation leads to inaccurate credit assignment, poor task decomposition capability, and suboptimal performance. To address this, this paper introduces the *nucleolus*—a solution concept from cooperative game theory—into MARL credit assignment for the first time, proposing the Nucleolus Q-Learning framework. Our method automatically identifies stable and efficient small-scale subcoalitions via the nucleolus solution, enabling interpretable subtask decomposition while providing theoretical guarantees on convergence and stability. Evaluated on Predator-Prey and StarCraft II benchmarks across multiple difficulty levels, our approach achieves significant improvements in win rate and cumulative reward—particularly outperforming four state-of-the-art baselines in hard and super-hard scenarios. These results empirically validate the effectiveness and generalizability of multi-subcoalition structures for modeling complex cooperative tasks.

📝 Abstract
In cooperative multi-agent reinforcement learning (MARL), agents typically form a single grand coalition based on credit assignment to tackle a composite task, often resulting in suboptimal performance. This paper proposes a nucleolus-based credit assignment grounded in cooperative game theory, enabling the autonomous partitioning of agents into multiple small coalitions that can effectively identify and complete subtasks within a larger composite task. Specifically, the proposed nucleolus Q-learning assigns fair credits to each agent, and the nucleolus Q-operator provides theoretical guarantees, with interpretability, for both learning convergence and the stability of the formed small coalitions. Through experiments on Predator-Prey and StarCraft scenarios across varying difficulty levels, the approach demonstrates the emergence of multiple effective coalitions during MARL training, leading to faster learning and superior performance in win rate and cumulative reward, especially in hard and super-hard environments, compared to four baseline methods. The nucleolus-based credit assignment shows promise for complex composite tasks requiring effective subteams of agents.
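The nucleolus, the solution concept the paper builds on, selects the payoff allocation whose sorted vector of coalition "excesses" is lexicographically minimal. As a rough intuition aid (not the paper's algorithm, which learns this via Q-learning), the sketch below compares two candidate allocations for a hypothetical 3-agent game with an assumed characteristic function `v`:

```python
from itertools import combinations

# Hypothetical characteristic function v(S) for a 3-agent cooperative game
# (the values are illustrative assumptions, not from the paper).
players = (1, 2, 3)
v = {
    (1,): 0.0, (2,): 0.0, (3,): 0.0,
    (1, 2): 4.0, (1, 3): 3.0, (2, 3): 2.0,
    (1, 2, 3): 6.0,
}

def excess_vector(x):
    """Excesses e(S, x) = v(S) - x(S) over all proper coalitions,
    sorted in descending order (largest dissatisfaction first)."""
    excesses = []
    for r in range(1, len(players)):          # proper coalitions only
        for S in combinations(players, r):
            excesses.append(v[S] - sum(x[i] for i in S))
    return sorted(excesses, reverse=True)

# Two efficient allocations: each sums to v(grand coalition) = 6.
x_a = {1: 3.0, 2: 2.0, 3: 1.0}
x_b = {1: 2.0, 2: 2.0, 3: 2.0}

# The nucleolus is the allocation with the lexicographically smallest
# sorted excess vector; here we simply compare the two candidates.
better = x_a if excess_vector(x_a) < excess_vector(x_b) else x_b
```

Under these assumed values, `x_a` leaves every coalition less dissatisfied than `x_b` does at its worst, so `x_a` is the more stable allocation; the paper's nucleolus Q-learning pursues the same stability criterion over learned subcoalitions.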
Problem

Research questions and friction points this paper is trying to address.

Improves coalition formation in multi-agent reinforcement learning.
Enables autonomous agent partitioning for subtask completion.
Ensures fair credit assignment and coalition stability.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nucleolus-based credit assignment in MARL
Autonomous agent coalition partitioning
Nucleolus Q-learning with theoretical guarantees
Yugu Li
University of South Australia, Adelaide, Australia
Zehong Cao
University of South Australia, Adelaide, Australia
Jianglin Qiao
University of South Australia
Artificial Intelligence
Siyi Hu
Adelaide University
Generative AI · Reinforcement Learning · Multi-Agent Systems