🤖 AI Summary
This paper identifies a fundamental tension between individual fairness and ecosystem-level fairness in competitive settings: even when multiple firms' classifiers individually satisfy fairness constraints (e.g., Equal Opportunity), their joint deployment can exacerbate system-wide unfairness. Method: the authors develop the first theoretical model of fairness degradation under algorithmic competition, introducing a two-dimensional quantitative framework (based on data overlap and model correlation) to characterize fairness erosion. Extensive simulations confirm that fairness loss increases significantly with higher overlap and correlation. Contributions/Results: (1) the phenomenon of "ecosystem-level fairness degradation" is formally defined and empirically validated; (2) it is proven that improving individual fairness may harm aggregate fairness; and (3) the paper provides a quantifiable theoretical foundation and actionable intervention pathways for platform regulation and multi-agent algorithmic governance.
📝 Abstract
Algorithmic fairness has emerged as a central issue in machine learning (ML), and it has become standard practice to adjust ML algorithms so that they satisfy fairness requirements such as Equal Opportunity. In this paper we consider the effects of adopting such fair classifiers on the overall level of ecosystem fairness. Specifically, we introduce the study of fairness with competing firms, and demonstrate that individually fair classifiers can fail to yield fair ecosystems. Our results quantify the loss of fairness in such systems under a variety of conditions, based on the classifiers' correlation and the level of their data overlap. We show that even if competing classifiers are individually fair, the ecosystem's outcome may be unfair, and that adjusting biased algorithms to improve their individual fairness may lead to an overall decline in ecosystem fairness. In addition to these theoretical results, we provide supporting experimental evidence. Together, our model and results constitute a novel and essential call to action.
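As a hypothetical illustration of the mechanism the abstract describes (not the paper's actual model), the sketch below shows how classifier correlation can break ecosystem fairness: two firms' classifiers each satisfy Equal Opportunity (equal true-positive rates across groups A and B), but their decisions are correlated in group A and independent in group B. If a qualified applicant succeeds when accepted by at least one firm, the ecosystem-level acceptance rates diverge. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # qualified applicants per group
tpr = 0.5    # each firm's classifier accepts qualified applicants at this rate in BOTH groups

# Group A: the two firms' decisions are perfectly correlated
# (e.g., heavy data overlap leads them to accept the same people).
u = rng.random(n)
firm1_a = u < tpr
firm2_a = firm1_a.copy()

# Group B: the two firms decide independently (disjoint data).
firm1_b = rng.random(n) < tpr
firm2_b = rng.random(n) < tpr

# Individual fairness: each firm's TPR is (approximately) equal across groups.
assert abs(firm1_a.mean() - firm1_b.mean()) < 0.01
assert abs(firm2_a.mean() - firm2_b.mean()) < 0.01

# Ecosystem outcome: a qualified applicant succeeds if at least one firm accepts them.
eco_a = (firm1_a | firm2_a).mean()  # correlated decisions: about 0.50
eco_b = (firm1_b | firm2_b).mean()  # independent decisions: about 1 - 0.5**2 = 0.75
print(f"ecosystem TPR, group A: {eco_a:.3f}")
print(f"ecosystem TPR, group B: {eco_b:.3f}")
```

Both classifiers are individually fair, yet group B's qualified applicants succeed roughly 50% more often at the ecosystem level, purely because the firms' errors are correlated within group A; this is the kind of gap the paper's overlap/correlation framework quantifies.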