AI Summary
Background: Existing bias metrics lack clear semantic interpretation and theoretical rigor for non-binary labels and multiple intersecting sensitive attributes, limiting their applicability in anti-discrimination compliance (e.g., OFCCP employment equity assessments).
Method: This paper proposes Uniform Bias (UB), the first bias metric with globally interpretable semantics, formally derived foundations, and empirical verifiability. UB employs a formal modeling and algorithmically grounded derivation framework to align bias quantification directly with policy objectives.
Contribution/Results: Evaluated across nine public datasets, UB demonstrates consistency, interpretability, and practical utility. It enables principled bias mitigation, yielding deployable fairness-aware models. UB thus provides policymakers with a theoretically sound and operationally viable tool for bias assessment and intervention in complex, real-world settings involving multi-attribute intersectionality and non-binary outcomes.
Abstract
The widespread use of machine learning and data-driven algorithms for decision making has been steadily increasing for many years. The areas in which this is happening are diverse: healthcare, employment, finance, education, and the legal system, to name a few; and the associated negative side effects are increasingly harmful to society. Negative data bias is one of these, and it tends to result in harmful consequences for specific groups of people. Any mitigation strategy or effective policy that addresses the negative consequences of bias must start with awareness that the bias exists, together with a way to understand and quantify it. However, there is a lack of consensus on how to measure data bias, and the intended meaning is often context dependent and not uniform within the research community. The main contributions of our work are: (1) the definition of Uniform Bias (UB), the first bias measure with a clear and simple interpretation over the full range of bias values; (2) a systematic study characterizing the flaws of existing measures in the context of the anti-employment-discrimination rules used by the Office of Federal Contract Compliance Programs (OFCCP), additionally showing how UB solves open problems in this domain; and (3) a framework that provides an efficient way to derive a mathematical formula for a bias measure from an algorithmic specification of bias addition. Our results are experimentally validated on nine publicly available datasets and theoretically analyzed, providing novel insights into the problem. Based on our approach, we also design a bias mitigation model that may be useful to policymakers.