🤖 AI Summary
This work addresses the failure of conventional fairness methods when a single predictor is deployed across subpopulations with heterogeneous predictability. It formulates group fairness as a bargaining problem among subpopulations, proposing relative improvement (the ratio of a group's actual risk reduction to its maximum possible reduction) as a fairness metric, which corresponds to the Kalai-Smorodinsky bargaining solution. This criterion enjoys scale invariance and individual monotonicity, providing an axiomatic foundation for fair learning. By integrating game-theoretic principles, robust optimization, and finite-sample analysis, the paper establishes convergence guarantees for estimating relative improvement under mild conditions, overcoming the limitations of worst-group-loss approaches in settings with heterogeneous predictability.
📝 Abstract
For the problem of deploying a single predictor across multiple subpopulations, we propose a fundamentally different approach: interpreting group fairness as a bargaining problem among the subpopulations. This game-theoretic perspective reveals that existing robust optimization methods, such as minimizing worst-group loss or regret, correspond to classical bargaining solutions and embody different fairness principles. We propose relative improvement, the ratio of actual risk reduction to the potential reduction from a baseline predictor, which recovers the Kalai-Smorodinsky solution. Unlike absolute-scale methods, whose values may not be comparable across groups with different potential predictability, relative improvement carries axiomatic justification, including scale invariance and individual monotonicity. We establish finite-sample convergence guarantees under mild conditions.
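To make the relative-improvement criterion concrete, here is a minimal sketch of how the per-group quantity and a Kalai-Smorodinsky-style max-min aggregation might be computed. The function name, the numeric risks, and the handling of groups with no room for improvement are illustrative assumptions, not the paper's actual implementation.

```python
def relative_improvement(baseline_risk, achieved_risk, optimal_risk):
    """Fraction of the achievable risk reduction actually realized by a group.

    1.0 means the shared predictor matches the group-optimal risk;
    0.0 means no improvement over the baseline predictor.
    """
    max_reduction = baseline_risk - optimal_risk
    if max_reduction <= 0:
        # Assumption: a group that cannot improve on the baseline
        # is treated as fully satisfied.
        return 1.0
    return (baseline_risk - achieved_risk) / max_reduction

# Two groups with very different predictability: the absolute risk gaps
# (0.18 vs. 0.09) are not comparable, but relative improvement places
# both on a common [0, 1] scale.
easy = relative_improvement(baseline_risk=0.30, achieved_risk=0.12, optimal_risk=0.10)
hard = relative_improvement(baseline_risk=0.50, achieved_risk=0.41, optimal_risk=0.40)

# A Kalai-Smorodinsky-style criterion favors predictors that maximize
# the minimum relative improvement across groups.
worst_group = min(easy, hard)
```

Here both groups realize 90% of their achievable reduction, so the predictor treats them equally in relative terms even though the easier group's absolute gain is twice as large.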