A Principled Approach for a New Bias Measure

πŸ“… 2024-05-20
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing bias metrics lack clear semantic interpretation and theoretical rigor for non-binary labels and multiple intersecting sensitive attributes, limiting their applicability in anti-discrimination compliance (e.g., OFCCP employment equity assessments). Method: This paper proposes Uniform Bias (UB), the first bias metric with globally interpretable semantics, formally derivable foundations, and empirical verifiability. UB employs a formal modeling and algorithmically grounded derivation framework to directly align bias quantification with policy objectives. Contribution/Results: Evaluated across nine public datasets, UB demonstrates consistency, interpretability, and practical utility. It enables principled bias mitigation, yielding deployable fairness-aware models. UB thus provides policymakers with a theoretically sound and operationally viable tool for bias assessment and intervention in complex, real-world settings involving multi-attribute intersectionality and non-binary outcomes.

πŸ“ Abstract
The widespread use of machine learning and data-driven algorithms for decision making has been steadily increasing over many years. The areas in which this is happening are diverse: healthcare, employment, finance, education, and the legal system, to name a few; and the associated negative side effects are increasingly harmful to society. Negative data *bias* is one of those, which tends to result in harmful consequences for specific groups of people. Any mitigation strategy or effective policy that addresses the negative consequences of bias must start with awareness that bias exists, together with a way to understand and quantify it. However, there is a lack of consensus on how to measure data bias, and oftentimes the intended meaning is context dependent and not uniform within the research community. The main contributions of our work are: (1) The definition of Uniform Bias (UB), the first bias measure with a clear and simple interpretation in the full range of bias values. (2) A systematic study to characterize the flaws of existing measures in the context of anti-employment-discrimination rules used by the Office of Federal Contract Compliance Programs, additionally showing how UB solves open problems in this domain. (3) A framework that provides an efficient way to derive a mathematical formula for a bias measure based on an algorithmic specification of bias addition. Our results are experimentally validated using nine publicly available datasets and theoretically analyzed, which provides novel insights about the problem. Based on our approach, we also design a bias mitigation model that might be useful to policymakers.
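For context on the compliance setting the abstract mentions: a minimal sketch of the classical impact-ratio ("four-fifths rule") check used in OFCCP/EEOC-style adverse-impact assessments. This is not the paper's Uniform Bias measure; it illustrates the kind of baseline metric whose flaws the paper characterizes. The group names and counts are hypothetical.

```python
def selection_rates(selected, applicants):
    """Per-group selection rate: selected[g] / applicants[g]."""
    return {g: selected[g] / applicants[g] for g in applicants}

def impact_ratio(selected, applicants):
    """Ratio of the lowest to the highest group selection rate.
    Under the four-fifths rule, values below 0.8 flag potential
    adverse impact against the lowest-rate group."""
    rates = selection_rates(selected, applicants)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring data for two groups.
applicants = {"group_a": 100, "group_b": 80}
selected = {"group_a": 50, "group_b": 24}

ratio = impact_ratio(selected, applicants)  # 0.3 / 0.5 = 0.6
print(f"impact ratio = {ratio:.2f}, adverse impact flagged: {ratio < 0.8}")
```

Note that this ratio is only interpretable for binary outcomes and a single sensitive attribute, which is exactly the limitation (non-binary labels, multiple intersecting attributes) the paper targets.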
Problem

Research questions and friction points this paper is trying to address.

Mitigate data bias in machine learning decisions
Handle non-binary labels and multiple sensitive attributes
Ensure explainable methods with mathematical correctness guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable bias mitigation with mathematical guarantees
Table discovery for creating unbiased real datasets
Handling non-binary labels and multiple sensitive attributes
Bruno Scarone
Northeastern University
Data Mining · Algorithmic Fairness
Alfredo Viola
Centro de Investigadores CientΓ­ficos La Comarca, La Floresta, Uruguay
Ricardo A. Baeza-Yates
Institute for Experiential AI, Northeastern University, Silicon Valley, USA