Beyond Internal Data: Bounding and Estimating Fairness from Incomplete Data

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Evaluating AI model fairness is challenging when complete data are unavailable—e.g., only internal institutional samples and external demographic marginals are accessible. Method: This paper proposes a fairness bounding framework for disjoint data settings. It constructs the set of feasible joint distributions consistent with both internal and external marginal constraints, then derives rigorous upper and lower bounds for fairness metrics—including equal opportunity and predictive parity—via distributionally robust optimization and marginal statistical inference. The approach requires no assumptions about the underlying data-generating process or access to raw shared data. Contribution/Results: The method enables compliant bias auditing under privacy-sensitive and data-siloed conditions. Experiments on synthetic, real-world healthcare, and credit datasets demonstrate that it yields tight, reliable, and interpretable fairness bounds—substantially outperforming existing proxy-based evaluation and imputation baselines.

📝 Abstract
Ensuring fairness in AI systems is critical, especially in high-stakes domains such as lending, hiring, and healthcare. This urgency is reflected in emerging global regulations that mandate fairness assessments and independent bias audits. However, procuring the complete data necessary for fairness testing remains a significant challenge. In industry settings, legal and privacy concerns restrict the collection of the demographic data required to assess group disparities, and auditors face practical and cultural obstacles in gaining access to data. In practice, data relevant for fairness testing are often split across separate sources: internal datasets held by institutions containing predictive attributes, and external public datasets such as census data containing protected attributes, each providing only partial, marginal information. Our work seeks to leverage such separate data to estimate model fairness when complete data are inaccessible. We propose utilising the available separate data to estimate the set of feasible joint distributions and then compute the set of plausible fairness metrics. Through simulated and real experiments, we demonstrate that we can derive meaningful bounds on fairness metrics and obtain reliable estimates of the true metric. Our results demonstrate that this approach can serve as a practical and effective solution for fairness testing in real-world settings where access to complete data is restricted.
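To make the core idea concrete, here is a minimal sketch (not the authors' full distributionally robust procedure) of the simplest instance: when the internal data supply only the marginal rate of positive predictions and an external source supplies only the marginal prevalence of the protected group, classical Fréchet bounds constrain the one unknown joint cell, and the demographic parity difference can be bounded in closed form. The function and variable names below are hypothetical.

```python
def demographic_parity_bounds(p_a1, p_yhat1):
    """Bound the demographic parity difference P(Yhat=1|A=1) - P(Yhat=1|A=0)
    using only two marginals: P(A=1) from external data (e.g. census) and
    P(Yhat=1) from internal data.

    Frechet bounds constrain the unknown joint cell p11 = P(A=1, Yhat=1);
    the parity difference is increasing in p11, so evaluating it at the two
    extremes of the feasible set yields the lower and upper bounds.
    """
    assert 0.0 < p_a1 < 1.0 and 0.0 <= p_yhat1 <= 1.0
    lo_p11 = max(0.0, p_a1 + p_yhat1 - 1.0)  # Frechet lower bound on the joint cell
    hi_p11 = min(p_a1, p_yhat1)              # Frechet upper bound on the joint cell

    def parity_diff(p11):
        # P(Yhat=1|A=1) - P(Yhat=1|A=0); both denominators are known marginals,
        # so the metric is linear (and monotone) in p11.
        return p11 / p_a1 - (p_yhat1 - p11) / (1.0 - p_a1)

    return parity_diff(lo_p11), parity_diff(hi_p11)
```

With richer internal covariates and additional marginal constraints, the feasible set of joint distributions shrinks and the intervals tighten, but the optimisation over that set generally requires a linear or distributionally robust program rather than a closed form, which is where the paper's framework comes in.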
Problem

Research questions and friction points this paper is trying to address.

Estimating AI fairness without complete demographic data
Leveraging separate internal and external datasets for fairness assessment
Bounding fairness metrics when protected attributes are inaccessible
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverage separate incomplete data sources
Estimate feasible joint distributions
Compute plausible fairness metrics bounds
Authors

Varsha Ramineni
Centre for Artificial Intelligence, University College London

Hossein A. Rahmani
PhD Student, University College London
Natural Language Processing, Information Retrieval, Machine Learning

Emine Yilmaz
University College London
Information Retrieval, Natural Language Processing, Machine Learning

David Barber
Centre for Artificial Intelligence, University College London