Formalising Anti-Discrimination Law in Automated Decision Systems

📅 2024-06-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses algorithmic discrimination arising from machine learning in high-stakes automated decision-making (e.g., credit scoring), aiming to bridge the gap between algorithmic fairness metrics and anti-discrimination legal standards. Methodologically, it constructs a decision-theoretic framework grounded in UK anti-discrimination law, formally embedding legal requirements, including causality and statutory exceptions, into algorithm design. It introduces a novel fairness metric, *conditional estimation parity*, which jointly accounts for estimation error, the data-generating process, and legal causal criteria. A real-world example based on an algorithmic credit discrimination case demonstrates that the framework yields interpretable, legally grounded decisions and supports regulatory compliance assessment. The work provides a jurisprudentially rigorous yet technically feasible pathway for AI governance under Commonwealth and European legal regimes.

📝 Abstract
Algorithmic discrimination is a critical concern as machine learning models are used in high-stakes decision-making in legally protected contexts. Although substantial research on algorithmic bias and discrimination has led to the development of fairness metrics, several critical legal issues remain unaddressed in practice. To address these gaps, we introduce a novel decision-theoretic framework grounded in anti-discrimination law of the United Kingdom, which has global influence and aligns more closely with European and Commonwealth legal systems. We propose the 'conditional estimation parity' metric, which accounts for estimation error and the underlying data-generating process, aligning with legal standards. Through a real-world example based on an algorithmic credit discrimination case, we demonstrate the practical application of our formalism and provide insights for aligning fairness metrics with legal principles. Our approach bridges the divide between machine learning fairness metrics and anti-discrimination law, offering a legally grounded framework for developing non-discriminatory automated decision systems.
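The paper's formal definition of conditional estimation parity is not reproduced on this page. As a rough illustration only, the Python sketch below computes a conventional conditional (statistical) parity gap, comparing predicted-positive rates across protected groups within strata of a legitimate conditioning variable; the proposed metric is described as extending this kind of comparison with estimation error and the data-generating process. The function name, data layout, and toy credit example are all hypothetical.

```python
# Illustrative sketch only: computes per-stratum gaps in predicted-positive
# rates between protected groups (conditional statistical parity), not the
# paper's own "conditional estimation parity" metric.
from collections import defaultdict

def conditional_parity_gaps(records):
    """records: iterable of (group, stratum, decision) tuples,
    where decision is 1 (favourable) or 0 (unfavourable)."""
    counts = defaultdict(lambda: [0, 0])  # (stratum, group) -> [positives, total]
    for group, stratum, decision in records:
        counts[(stratum, group)][0] += decision
        counts[(stratum, group)][1] += 1

    gaps = {}
    strata = {s for s, _ in counts}
    for s in strata:
        # favourable-decision rate per group within this stratum
        rates = {g: pos / tot for (st, g), (pos, tot) in counts.items() if st == s}
        if len(rates) >= 2:
            gaps[s] = max(rates.values()) - min(rates.values())
    return gaps

# Hypothetical credit-scoring example: groups A and B, conditioned on income band.
data = [
    ("A", "low", 1), ("A", "low", 0), ("B", "low", 0), ("B", "low", 0),
    ("A", "high", 1), ("A", "high", 1), ("B", "high", 1), ("B", "high", 0),
]
print(conditional_parity_gaps(data))  # e.g. {'low': 0.5, 'high': 0.5}
```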
Problem

Research questions and friction points this paper is trying to address.

Formalizing anti-discrimination law in automated systems
Addressing algorithmic bias in high-stakes decision-making
Aligning fairness metrics with legal standards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel decision-theoretic framework introduced
Conditional estimation parity metric proposed
Bridges machine learning and legal standards