🤖 AI Summary
This paper addresses the neglect of group fairness in conventional multi-armed bandit (MAB) regret definitions, which matters in settings such as equitable patient welfare allocation in clinical trials, by proposing a general fairness framework that balances individual and societal welfare. Methodologically, it combines a standard Upper Confidence Bound (UCB) algorithm with an initial uniform exploration phase, relying solely on the additive Hoeffding inequality and thus naturally accommodating sub-Gaussian reward distributions (e.g., Gaussian). Theoretically, it provides the first proof that this simple strategy achieves near-optimal Nash regret, and it unifies a broad class of fairness-aware regrets, including $p$-mean regret, without imposing restrictive distributional assumptions on rewards, thereby overcoming limitations of prior work. The resulting algorithm attains (nearly) optimal regret bounds uniformly across these fairness metrics, improving on existing approaches.
📝 Abstract
Regret in stochastic multi-armed bandits traditionally measures the difference between the highest reward and either the arithmetic mean of accumulated rewards or the final reward. These conventional metrics often fail to address fairness among agents receiving rewards, particularly in settings where rewards are distributed across a population, such as patients in clinical trials. To address this, a recent body of work has introduced Nash regret, which evaluates performance via the geometric mean of accumulated rewards, aligning with the Nash social welfare function known for satisfying fairness axioms.
To minimize Nash regret, existing approaches require specialized algorithm designs and strong assumptions, such as multiplicative concentration inequalities and bounded, non-negative rewards, making them unsuitable for even Gaussian reward distributions. We demonstrate that an initial uniform exploration phase followed by a standard Upper Confidence Bound (UCB) algorithm achieves near-optimal Nash regret, while relying only on additive Hoeffding bounds, and naturally extending to sub-Gaussian rewards. Furthermore, we generalize the algorithm to a broad class of fairness metrics called the $p$-mean regret, proving (nearly) optimal regret bounds uniformly across all $p$ values. This is in contrast to prior work, which made extremely restrictive assumptions on the bandit instances and even then achieved suboptimal regret bounds.
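The two-phase strategy described above can be sketched in a few lines: round-robin uniform pulls for an initial phase, then the standard UCB index with an additive Hoeffding-style confidence bonus. The exploration budget, confidence constant, and all names below are illustrative assumptions, not the paper's exact specification or tuning.

```python
import math
import random

def uniform_then_ucb(arms, T, explore_frac=0.1):
    """Sketch: uniform exploration phase followed by standard UCB.

    `arms` is a list of no-argument callables returning stochastic
    rewards. Returns the average reward collected over T rounds.
    """
    k = len(arms)
    counts = [0] * k    # pulls per arm
    sums = [0.0] * k    # cumulative reward per arm
    T0 = max(k, int(explore_frac * T))  # assumed exploration budget

    total = 0.0
    for t in range(T):
        if t < T0:
            i = t % k  # round-robin uniform exploration
        else:
            # standard UCB index: empirical mean plus an additive
            # Hoeffding-style bonus
            i = max(range(k), key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2 * math.log(t + 1) / counts[a]))
        r = arms[i]()
        counts[i] += 1
        sums[i] += r
        total += r
    return total / T
```

For example, with two Bernoulli arms of means 0.9 and 0.1, the post-exploration rounds concentrate on the better arm, so the average collected reward approaches 0.9.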