🤖 AI Summary
Existing MARL fairness research predominantly frames fairness as workload balancing, neglecting agents' specialized skills and the structured coordination that real-world domains such as healthcare require; this can overload highly skilled agents or produce task-skill mismatches. To address this, we propose FairSkillMARL, a framework that defines fairness as the dual objective of workload balance and skill-task alignment. We also introduce MARLHospital, a customizable, healthcare-inspired environment for modeling team compositions and energy-constrained scheduling and their impact on fairness. Experiments pairing FairSkillMARL with four standard MARL methods and comparing it against two state-of-the-art fairness metrics suggest that fairness based solely on equal workload can induce task-skill mismatches, motivating more robust metrics that capture skill-task misalignment. Our work provides tools and a foundation for studying fairness in heterogeneous multi-agent systems where effort must align with expertise.
📝 Abstract
Fairness in multi-agent reinforcement learning (MARL) is often framed as a workload balance problem, overlooking agent expertise and the structured coordination required in real-world domains. In healthcare, equitable task allocation requires both workload balance and expertise alignment to prevent burnout and the overuse of highly skilled agents. Workload balance refers to distributing an approximately equal number of subtasks, or equalized effort, across healthcare workers regardless of their expertise. We make two contributions to address this problem. First, we propose FairSkillMARL, a framework that defines fairness as the dual objective of workload balance and skill-task alignment. Second, we introduce MARLHospital, a customizable healthcare-inspired environment for modeling team compositions and the impact of energy-constrained scheduling on fairness, as no existing simulators are well-suited to this problem. We evaluated FairSkillMARL in combination with four standard MARL methods and compared it against two state-of-the-art fairness metrics. Our results suggest that fairness based solely on equal workload can lead to task-skill mismatches and highlight the need for more robust metrics that capture skill-task misalignment. Our work provides tools and a foundation for studying fairness in heterogeneous multi-agent systems where aligning effort with expertise is critical.
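The dual objective described above (workload balance plus skill-task alignment) can be pictured as a single scalar penalty over an allocation. The sketch below is only an illustration of that idea, not the paper's actual metric: the function name, the variance-based imbalance term, the mismatch-rate term, and the weighting scheme are all assumptions for exposition.

```python
import statistics

def dual_objective_fairness(workloads, assignments, skills, alpha=0.5):
    """Illustrative fairness penalty: lower is better.

    workloads:   per-agent task counts
    assignments: list of (agent, required_skill) pairs
    skills:      dict mapping agent -> set of skills it possesses
    alpha:       assumed trade-off weight between the two objectives
    """
    # Workload imbalance: population std. dev. normalized by the mean,
    # so perfectly equal workloads contribute zero penalty.
    mean = statistics.fmean(workloads)
    imbalance = statistics.pstdev(workloads) / mean if mean else 0.0

    # Skill mismatch: fraction of assignments the agent is unqualified for.
    mismatches = sum(1 for agent, skill in assignments
                     if skill not in skills[agent])
    mismatch_rate = mismatches / len(assignments) if assignments else 0.0

    # Combine both penalties into one scalar objective.
    return alpha * imbalance + (1 - alpha) * mismatch_rate
```

Under this toy formulation, a workload-only metric corresponds to `alpha=1.0`, which is exactly the setting the abstract argues can hide skill-task mismatches.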