Multi-agent Multi-armed Bandits with Minimum Reward Guarantee Fairness

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the problem of simultaneously maximizing social welfare and ensuring individual fairness in multi-agent multi-armed bandits, where each agent must receive at least a fixed fraction of its maximum attainable expected reward. The authors introduce a "dual-regret" framework that jointly characterizes social-welfare regret and fairness regret, and propose RewardFairUCB, a UCB-based algorithm incorporating explicit fairness constraints. It achieves $\tilde{O}(T^{1/2})$ social-welfare regret and $\tilde{O}(T^{3/4})$ fairness regret, against an $\Omega(\sqrt{T})$ lower bound for both quantities, so the social-welfare bound is tight up to polylogarithmic factors. Empirical results show that RewardFairUCB outperforms existing baselines in balancing fairness and efficiency.

📝 Abstract
We investigate the problem of maximizing social welfare while ensuring fairness in a multi-agent multi-armed bandit (MA-MAB) setting. In this problem, a centralized decision-maker takes actions over time, generating random rewards for various agents. Our goal is to maximize the sum of expected cumulative rewards, a.k.a. social welfare, while ensuring that each agent receives an expected reward that is at least a constant fraction of the maximum possible expected reward. Our proposed algorithm, RewardFairUCB, leverages the Upper Confidence Bound (UCB) technique to achieve sublinear regret bounds for both fairness and social welfare. The fairness regret measures the positive difference between the minimum reward guarantee and the expected reward of a given policy, whereas the social welfare regret measures the difference between the social welfare of the optimal fair policy and that of the given policy. We show that the RewardFairUCB algorithm achieves an instance-independent social welfare regret guarantee of $\tilde{O}(T^{1/2})$ and a fairness regret upper bound of $\tilde{O}(T^{3/4})$. We also give a lower bound of $\Omega(\sqrt{T})$ for both social welfare and fairness regret. We evaluate RewardFairUCB's performance against various baseline and heuristic algorithms using simulated and real-world data, highlighting trade-offs between fairness and social welfare regrets.
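The two regret notions described in the abstract can be written out explicitly. The formalization below is an interpretation based only on the abstract's verbal definitions; the symbols $\alpha$, $\mu_i^{\max}$, and $r_i(t)$ are our notation, not necessarily the paper's.

```latex
% Interpretation of the abstract's regret definitions (our notation).
% N agents, horizon T, \alpha \in (0,1] the guaranteed fraction,
% \mu_i^{\max} the best per-round expected reward attainable by agent i,
% r_i(t) agent i's reward at round t under the policy being evaluated,
% \pi^* the welfare-optimal policy among those meeting the guarantee.
R_{\mathrm{fair}}(T) \;=\; \sum_{t=1}^{T} \sum_{i=1}^{N}
    \Big( \alpha\,\mu_i^{\max} - \mathbb{E}\big[r_i(t)\big] \Big)^{+},
\qquad
R_{\mathrm{sw}}(T) \;=\; T \cdot \mathrm{SW}(\pi^{*})
    \;-\; \sum_{t=1}^{T} \mathbb{E}\Big[\sum_{i=1}^{N} r_i(t)\Big].
```

Here $(x)^{+} = \max(0, x)$ keeps only the positive part, matching the abstract's "positive difference between the minimum reward guarantee and the expected reward of a given policy."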
Problem

Research questions and friction points this paper is trying to address.

Maximizing social welfare
Ensuring reward fairness
Multi-agent bandit setting
Innovation

Methods, ideas, or system contributions that make the work stand out.

UCB technique for fairness
Sublinear regret bounds
Minimum reward guarantee
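To make the setting concrete, here is a minimal runnable sketch of the MA-MAB setup: a vanilla UCB policy that maximizes empirical social welfare, followed by a post-hoc check of each agent's fairness shortfall. This is NOT the paper's RewardFairUCB (the abstract does not give its update rule); the arm means, noise level, fraction `ALPHA`, and horizon are all made-up example values.

```python
import math
import random

# Illustrative sketch only: plain UCB on social welfare plus a fairness
# shortfall measurement. All numbers below are assumed, not from the paper.
MEANS = [            # MEANS[k][i] = expected reward of arm k for agent i
    [0.9, 0.1],      # arm 0 favours agent 0
    [0.1, 0.9],      # arm 1 favours agent 1
    [0.6, 0.6],      # arm 2 is balanced and has the highest welfare (1.2)
]
K, N = len(MEANS), len(MEANS[0])
ALPHA = 0.5          # each agent should average >= ALPHA * its best arm mean
T = 5000

def ucb_social_welfare(seed=0):
    rng = random.Random(seed)
    counts = [0] * K                # pulls per arm
    welfare_sums = [0.0] * K        # empirical social welfare per arm
    agent_sums = [0.0] * N          # cumulative reward per agent
    for t in range(1, T + 1):
        if t <= K:                  # play each arm once to initialise
            k = t - 1
        else:                       # standard UCB index on social welfare
            k = max(range(K),
                    key=lambda a: welfare_sums[a] / counts[a]
                    + math.sqrt(2 * math.log(t) / counts[a]))
        rewards = [rng.gauss(MEANS[k][i], 0.1) for i in range(N)]
        counts[k] += 1
        welfare_sums[k] += sum(rewards)
        for i in range(N):
            agent_sums[i] += rewards[i]
    # per-agent guarantee and empirical shortfall below it
    guarantees = [ALPHA * max(MEANS[k][i] for k in range(K)) for i in range(N)]
    shortfall = [max(0.0, guarantees[i] - agent_sums[i] / T) for i in range(N)]
    return agent_sums, shortfall
```

In this toy instance the welfare-optimal arm also satisfies both agents' guarantees of $0.45$, so plain UCB happens to drive the shortfall toward zero; the interesting regime the paper targets is when welfare maximization and the minimum-reward guarantee conflict, which a fairness-unaware UCB cannot handle.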
Piyushi Manupriya
IIT Hyderabad, Hyderabad, India
Himanshu
IIT Hyderabad, Hyderabad, India
S. Jagarlapudi
IIT Hyderabad, Hyderabad, India
Ganesh Ghalme
Assistant Professor, Department of AI, IIT Hyderabad
game theory · mechanism design · machine learning