Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness

📅 2024-05-24
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses systemic inequities arising from the intersection of multiple social identities (e.g., race, gender, geography) across the health, energy, and housing sectors. We propose a cross-sectoral intersectional disparity quantification framework grounded in Latent Class Analysis (LCA). Unlike unidimensional fairness assessments, our approach jointly models multidimensional identities and disparities in resource access across domains, integrating heterogeneous data sources including EVENS and the 2021 UK Census. Validation against an official public equity metric shows a statistically significant correlation (p < 0.01). Empirical application to England and Wales reveals both inter-ethnic and intra-ethnic intersectional disparities previously obscured by aggregate analyses. The framework yields interpretable, actionable quantitative insights, directly supporting fair AI design and evidence-based, targeted policy interventions.

📝 Abstract
Interest in fair AI development is growing. The "Leave No One Behind" initiative urges us to address multiple and intersecting forms of inequality in access to services, resources, and opportunities, emphasising the significance of fairness in AI. This is particularly relevant as a growing number of AI tools are applied to decision-making processes, such as resource allocation and service scheme development, across sectors such as health, energy, and housing. Exploring joint inequalities across these sectors is therefore valuable for a thorough understanding of overall inequality and unfairness. This research introduces an approach to quantify cross-sectoral intersecting discrepancies among user-defined groups using latent class analysis. These discrepancies can be used to approximate inequality and provide valuable insights into fairness issues. We validate our approach using both proprietary and public datasets, including the EVENS and Census 2021 (England & Wales) datasets, to examine cross-sectoral intersecting discrepancies among different ethnic groups. We also verify the reliability of the quantified discrepancy through a correlation analysis with a public government metric. Our findings reveal significant discrepancies both among minority ethnic groups and between minority and non-minority ethnic groups, underscoring the need for targeted interventions in policy-making. Furthermore, we demonstrate how the proposed approach can provide valuable insights into ensuring fairness in machine learning systems.
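The reliability check the abstract describes (correlating the quantified group-level discrepancy with a public government metric) can be sketched as follows. All group scores and metric values below are made-up placeholders for illustration, not figures from the paper, and the paper's actual analysis may use a different correlation statistic:

```python
import numpy as np

# Hypothetical per-group discrepancy scores produced by an LCA-based framework
# (placeholder values, NOT results from the paper).
discrepancy = np.array([0.12, 0.35, 0.28, 0.51, 0.44, 0.19])

# Hypothetical values of a public equity metric (e.g. a deprivation index)
# for the same six groups, ordered consistently with `discrepancy`.
public_metric = np.array([0.10, 0.30, 0.33, 0.55, 0.40, 0.15])

# Pearson correlation between the quantified discrepancy and the public metric;
# a strong positive correlation supports the reliability of the discrepancy measure.
r = np.corrcoef(discrepancy, public_metric)[0, 1]
print(round(r, 3))
```

In practice one would also report a significance level for the correlation (the paper reports p < 0.01), e.g. via a rank-correlation test when the group scores are not normally distributed.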
Problem

Research questions and friction points this paper is trying to address.

Quantify cross-sectoral intersecting discrepancies
Address inequality in AI decision-making
Ensure fairness in machine learning systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Class Analysis
Cross-sectoral Discrepancies
Fairness in AI
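The core modelling tool named above is latent class analysis: respondents are assigned to unobserved classes that explain their pattern of categorical indicators (here, access to resources across sectors). A minimal sketch of fitting such a model with the EM algorithm, assuming binary access indicators and synthetic data — the function `fit_lca` and all data are illustrative, not the paper's implementation:

```python
import numpy as np

def fit_lca(X, n_classes, n_iter=200, seed=0):
    """Fit a latent class model to binary data X (n_samples x n_items) via EM.

    Returns class weights pi (K,), item-response probabilities theta (K x n_items),
    and posterior class memberships resp (n_samples x K).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))
    for _ in range(n_iter):
        # E-step: posterior probability of each latent class per respondent
        log_p = (np.log(pi)[None, :]
                 + X @ np.log(theta).T
                 + (1 - X) @ np.log(1 - theta).T)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and item-response probabilities
        pi = resp.mean(axis=0)
        theta = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return pi, theta, resp

# Toy data: two hypothetical "access" profiles over three sector indicators
# (e.g. health, energy, housing); values are synthetic.
rng = np.random.default_rng(1)
z = rng.integers(0, 2, size=500)
true_theta = np.array([[0.9, 0.8, 0.85],   # high-access profile
                       [0.2, 0.3, 0.25]])  # low-access profile
X = (rng.random((500, 3)) < true_theta[z]).astype(float)
pi, theta, resp = fit_lca(X, n_classes=2)
```

Once classes are recovered, group-level discrepancies could then be derived by comparing how user-defined groups (e.g. ethnic groups) distribute across the latent classes; how exactly the paper aggregates this into its discrepancy measure is beyond this sketch.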
Yingfang Yuan
Heriot-Watt University
Inter/Multi-disciplinary AI, Deep Learning, Graph Neural Network, Agent
Kefan Chen
Brown University, Meta
Computer Vision, Deep Learning
Mehdi Rizvi
School of Mathematical and Computer Sciences, Heriot-Watt University
Lynne Baillie
School of Mathematical and Computer Sciences, Heriot-Watt University
Wei Pang
School of Mathematical and Computer Sciences, Heriot-Watt University