🤖 AI Summary
This paper addresses systemic inequities arising from the intersection of multiple social identities (e.g., race, gender, geography) across the health, energy, and housing sectors. It proposes the first cross-sectoral framework for quantifying intersectional disparities, grounded in latent class analysis (LCA). Unlike unidimensional fairness assessments, the approach jointly models multidimensional identities and disparities in resource access across domains, integrating heterogeneous data sources including the EVENS survey and the 2021 UK Census. A correlation analysis against an official public equity metric confirms the reliability of the quantified disparities (p < 0.01). Empirical application to England and Wales reveals both inter-ethnic and intra-ethnic intersectional disparities previously obscured by aggregate analyses. The framework yields interpretable, actionable quantitative insights that directly support fair AI design and evidence-based, targeted policy interventions.
📝 Abstract
Interest in developing fair AI is growing. The "Leave No One Behind" initiative urges us to address multiple and intersecting forms of inequality in access to services, resources, and opportunities, underscoring the importance of fairness in AI. This is particularly relevant as a growing number of AI tools are applied to decision-making processes, such as resource allocation and service scheme development, in sectors including health, energy, and housing. Exploring joint inequalities across these sectors is therefore essential for a thorough understanding of overall inequality and unfairness. This research introduces a novel approach to quantifying cross-sectoral intersecting discrepancies among user-defined groups using latent class analysis. These discrepancies can be used to approximate inequality and offer valuable insights into fairness issues. We validate our approach on both proprietary and public datasets, including the EVENS and Census 2021 (England and Wales) datasets, examining cross-sectoral intersecting discrepancies among different ethnic groups. We also verify the reliability of the quantified discrepancy through a correlation analysis with a government public metric. Our findings reveal significant discrepancies both among minority ethnic groups and between minority and non-minority ethnic groups, emphasising the need for targeted interventions in policy-making processes. Furthermore, we demonstrate how the proposed approach can provide valuable insights into ensuring fairness in machine learning systems.
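To make the core idea concrete, the sketch below shows one way the quantification described in the abstract could proceed: fit a latent class model to binary access indicators spanning several sectors, then compare group-level membership in the low-access latent class. This is a minimal illustration on synthetic data, not the paper's actual pipeline; the two-class EM, the three indicators, and the two illustrative groups are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lca(X, n_classes=2, n_iter=200, seed=0):
    """Fit a latent class model to binary indicators X (n x d) via EM."""
    init_rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)              # class weights
    theta = init_rng.uniform(0.25, 0.75, (n_classes, d))  # item probabilities
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each respondent
        log_post = (X @ np.log(theta).T
                    + (1 - X) @ np.log(1 - theta).T
                    + np.log(pi))
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class weights and item probabilities
        pi = resp.mean(axis=0)
        theta = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return pi, theta, resp

# Synthetic data: three binary access indicators (stand-ins for health,
# energy, housing) for two hypothetical groups with different access rates.
n = 1000
group = rng.integers(0, 2, n)                        # 0 = group A, 1 = group B
p_access = np.where(group[:, None] == 0, 0.8, 0.5)   # group B has poorer access
X = (rng.random((n, 3)) < p_access).astype(float)

pi, theta, resp = fit_lca(X, n_classes=2)

# Discrepancy proxy: gap in membership of the low-access latent class.
low_class = theta.mean(axis=1).argmin()              # class with lowest access
rate_a = resp[group == 0, low_class].mean()
rate_b = resp[group == 1, low_class].mean()
print(f"low-access membership: group A={rate_a:.2f}, group B={rate_b:.2f}")
```

In the paper's setting, the indicators would come from survey and census records, the groups would be user-defined intersectional identities, and the resulting gaps would then be checked against a public equity metric via correlation analysis.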