🤖 AI Summary
AI fairness frameworks struggle to satisfy multiple quantitative fairness constraints simultaneously across diverse sociolegal contexts. Method: This paper introduces the “corrective bias” framework, which moves beyond purely statistical fairness paradigms by integrating Rawls's difference principle and Dworkin's theory of equality, supported by normative ethical analysis, empirical evidence from social policy (e.g., evaluations of affirmative action), interdisciplinary modeling, and iterative impact assessment, with an emphasis on contextualization and human-AI value alignment. Contribution/Results: The paper proposes the first systematic five-step fairness governance cycle (identification, definition, intervention, measurement, and iteration), grounded in detecting structural injustice, delineating protected groups, and applying corrective interventions. This cycle offers policymakers and engineers an operationally viable pathway that balances mathematical rigor with sensitivity to justice, enabling context-aware, ethically grounded AI fairness governance.
📝 Abstract
Defining fairness in AI remains a persistent challenge, largely because fairness is deeply context-dependent and lacks a universal definition. Numerous mathematical formulations of fairness exist, yet they sometimes conflict with one another and diverge from social, economic, and legal understandings of justice. Traditional quantitative definitions focus primarily on statistical comparisons between groups, and in many settings no single decision rule can satisfy several such fairness constraints at once. Drawing on philosophical theories (Rawls's difference principle and Dworkin's theory of equality) and empirical evidence supporting affirmative action, we argue that fairness sometimes necessitates deliberate, context-aware preferential treatment of historically marginalized groups. Rather than viewing bias solely as a flaw to eliminate, we propose a framework that embraces corrective, intentional biases to promote genuine equality of opportunity. Our approach involves identifying unfairness, recognizing protected groups and individuals, applying corrective strategies, measuring impact, and iterating on improvements. By bridging mathematical precision with ethical and contextual considerations, we advocate for an AI fairness paradigm that goes beyond neutrality to actively advance social justice.
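
To make the tension among statistical fairness criteria concrete, the short sketch below (not taken from the paper; the data, group labels, and thresholds are invented for illustration) computes two common group metrics on synthetic scores: the demographic-parity gap in selection rates and the equal-opportunity gap in true-positive rates. It shows that a threshold adjusted to close the selection-rate gap for a disadvantaged group does not automatically close the true-positive-rate gap, which is the kind of conflict the abstract refers to.

```python
# Minimal sketch (illustrative only, not the paper's method): two common
# statistical fairness criteria computed on synthetic data, showing that
# satisfying one does not imply satisfying the other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical group indicator: 0 = advantaged, 1 = historically marginalized.
group = rng.integers(0, 2, size=n)
# Synthetic "qualification" scores with a group-dependent shift, standing in
# for structural disadvantage; ground-truth outcomes depend on the score.
score = rng.normal(loc=np.where(group == 1, -0.5, 0.0), scale=1.0, size=n)
label = (score + rng.normal(scale=0.5, size=n) > 0).astype(int)

def group_rates(pred, label, group):
    """Selection rate and true-positive rate for each group."""
    out = {}
    for g in (0, 1):
        mask = group == g
        sel = pred[mask].mean()                 # P(pred = 1 | group = g)
        tpr = pred[mask & (label == 1)].mean()  # P(pred = 1 | y = 1, group = g)
        out[g] = (sel, tpr)
    return out

# A single "neutral" threshold applied identically to both groups.
pred_neutral = (score > 0.0).astype(int)

# Group-specific thresholds chosen so selection rates roughly match
# (a corrective-style adjustment; the values are illustrative).
pred_adjusted = (score > np.where(group == 1, -0.5, 0.0)).astype(int)

for name, pred in [("neutral threshold", pred_neutral),
                   ("group-adjusted thresholds", pred_adjusted)]:
    r = group_rates(pred, label, group)
    dp_gap = abs(r[0][0] - r[1][0])  # demographic-parity gap (selection rates)
    eo_gap = abs(r[0][1] - r[1][1])  # equal-opportunity gap (true-positive rates)
    print(f"{name}: demographic-parity gap = {dp_gap:.3f}, "
          f"equal-opportunity gap = {eo_gap:.3f}")
```

Which of these gaps should take priority, and whether a group-specific adjustment like the one sketched here is warranted, is exactly the context-dependent, corrective judgment the paper argues cannot be settled by the metrics alone.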