Partial Identification Approach to Counterfactual Fairness Assessment

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fairness evaluation of AI decision systems faces the challenge of non-identifiability of counterfactual metrics: sensitive attributes (e.g., race, age) are inherently unmanipulable, rendering causal effects non-identifiable from observational data alone. Method: We propose a Bayesian partial identification framework that integrates causal inference and counterfactual modeling to derive high-confidence bounds on counterfactual fairness measures, without requiring strong structural assumptions. Contribution/Results: Our approach accounts for identification uncertainty and provides, for the first time, statistically rigorous, high-confidence quantitative bounds for otherwise non-identifiable fairness metrics. Empirical evaluation on the COMPAS dataset reveals a significant positive spurious effect when race is set to African-American, and a negative direct causal effect of increasing age. This work establishes a theoretically grounded, operationally viable paradigm for fairness assessment of black-box algorithms.
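The paper's own algorithm is not reproduced here, but the core idea of partial identification can be illustrated with the classic assumption-free (Manski-style) bounds: since the counterfactual outcome Y_a is never observed for units with A ≠ a, P(Y_a = 1) can only be bounded by filling those units in with 0 or 1. A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def manski_bounds(y, a, target=1):
    """Assumption-free bounds on the counterfactual P(Y_a = 1).

    Observed units with A == target contribute their actual outcome;
    for units with A != target the counterfactual is unknown, so the
    lower bound fills it in with 0 and the upper bound with 1:
        P(Y=1, A=a)  <=  P(Y_a = 1)  <=  P(Y=1, A=a) + P(A != a)
    """
    y, a = np.asarray(y), np.asarray(a)
    p_joint = np.mean((y == 1) & (a == target))  # P(Y=1, A=target)
    p_other = np.mean(a != target)               # P(A != target)
    return p_joint, p_joint + p_other

# Toy binary data (not COMPAS): random outcome and group indicator.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=1000)
y = rng.integers(0, 2, size=1000)
lo, hi = manski_bounds(y, a)
```

The width of the interval, P(A != a), is exactly the share of the data for which the counterfactual is unobserved; the paper's contribution is to tighten and quantify such bounds in a Bayesian framework.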

📝 Abstract
The wide adoption of AI decision-making systems in critical domains such as criminal justice, loan approval, and hiring has heightened concerns about algorithmic fairness. Since we often only have access to the output of an algorithm without insight into its internal mechanisms, it is natural to examine how decisions would change when sensitive attributes (such as race) change. This led the research community to develop counterfactual fairness measures, but how to evaluate such a measure from available data remains a challenging task. In many practical applications, the target counterfactual measure is not identifiable, i.e., it cannot be uniquely determined from the combination of quantitative data and qualitative knowledge. This paper addresses the challenge using partial identification, which derives informative bounds over counterfactual fairness measures from observational data. We introduce a Bayesian approach to bound unknown counterfactual fairness measures with high confidence. We demonstrate our algorithm on the COMPAS dataset, examining fairness in recidivism risk scores with respect to race, age, and sex. Our results reveal a positive (spurious) effect on the COMPAS score when changing race to African-American (from all others) and a negative (direct causal) effect when transitioning from young to old age.
Problem

Research questions and friction points this paper is trying to address.

Assessing counterfactual fairness in AI systems when the target measures are not identifiable
Deriving informative bounds on fairness metrics via a partial identification approach
Evaluating racial and age bias in COMPAS recidivism risk scores
Innovation

Methods, ideas, or system contributions that make the work stand out.

Partial identification derives informative bounds on counterfactual fairness measures
Bayesian approach estimates the bounds with high confidence
Algorithm validated on the COMPAS dataset across race, age, and sex
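One way the "Bayesian bounds with high confidence" idea can be sketched, under assumptions not taken from the paper: place a flat Dirichlet prior over the four joint cells of binary (Y, A), sample the posterior, compute the assumption-free lower/upper bounds per draw, and report posterior quantiles as credible bounds. The function name and prior choice are illustrative only:

```python
import numpy as np

def bayesian_credible_bounds(y, a, target=1, n_draws=5000, level=0.95, seed=0):
    """Credible bounds on the partial-identification interval for P(Y_a = 1).

    Flat Dirichlet(1,1,1,1) prior over the joint cells of (Y, A) in
    {0,1}^2; each posterior draw yields a lower bound P(Y=1, A=target)
    and an upper bound P(Y=1, A=target) + P(A != target).
    """
    y, a = np.asarray(y), np.asarray(a)
    # Cell order: (y=0,a=0), (y=0,a=1), (y=1,a=0), (y=1,a=1)
    counts = np.array([np.sum((y == i) & (a == j)) for i in (0, 1) for j in (0, 1)])
    rng = np.random.default_rng(seed)
    draws = rng.dirichlet(counts + 1.0, size=n_draws)   # posterior over cells
    p_y1_a = draws[:, 2 + target]                       # P(Y=1, A=target)
    other_cols = [0, 2] if target == 1 else [1, 3]      # cells with A != target
    p_not_a = draws[:, other_cols].sum(axis=1)
    lo = np.quantile(p_y1_a, (1 - level) / 2)
    hi = np.quantile(p_y1_a + p_not_a, 1 - (1 - level) / 2)
    return lo, hi

# Toy binary data (not COMPAS), for illustration only.
rng = np.random.default_rng(1)
a = rng.integers(0, 2, size=2000)
y = rng.integers(0, 2, size=2000)
lo, hi = bayesian_credible_bounds(y, a)
```

The outer quantiles make the reported interval conservatively wide, mirroring the paper's goal of bounds that hold with high posterior confidence rather than point estimates.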