🤖 AI Summary
When observed decisions are shaped by systematic bias, predicting them directly can reproduce unfair outcomes. This work proposes a novel framework that treats the desert decision—the outcome an individual merits based on their actions, efforts, or abilities—as a latent target variable, shifting the prediction task from biased observations to these normatively justified outcomes. The discrepancy between observed and desert decisions then quantifies the degree of unfairness. By connecting fairness with measurement error models under causally interpretable assumptions, the framework establishes nonparametric identification conditions and supports estimation via sieve maximum likelihood, influence-function-based estimators, and sensitivity analysis for robust inference. The result aligns predictive accuracy with fairness relative to an appropriate target rather than the potentially biased observed decision.
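In measurement-error terms (our notation, not taken from the paper), the observed decision $Y$ can be read as a noisy proxy for the latent desert decision $D^*$. With covariates $X$ and a sensitive attribute $A$, one plausible reading of the identifying structure is the factorisation

$$
P(Y = y \mid X, A) \;=\; \sum_{d} \underbrace{P(Y = y \mid D^* = d,\, X,\, A)}_{\text{unfairness mechanism}} \; \underbrace{P(D^* = d \mid X)}_{\text{fair desert rule}},
$$

where requiring the desert rule to depend on $X$ but not on $A$ encodes the fairness assumption on $D^*$, and the first factor models how bias distorts $D^*$ into the observed $Y$.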
📝 Abstract
We study fairness in decision-making when the data may encode systematic bias. Existing approaches typically impose fairness constraints while predicting the observed decision, which may itself be unfair. We propose a novel framework for characterising and addressing fairness issues by introducing the notion of desert decision, a latent variable representing the decision an individual rightfully deserves based on their actions, efforts, or abilities. This formulation shifts the prediction target from the potentially biased observed decision to the desert decision. We advocate achieving fair decision-making by predicting the desert decision and assessing unfairness via the discrepancy between desert and observed decisions. We establish nonparametric identification results under causally interpretable assumptions on the fairness of the desert decision and the unfairness mechanism of the observed decision. For estimation, we develop a sieve maximum likelihood estimator for the desert decision rule and an influence-function-based estimator for the degree of unfairness. Sensitivity analysis procedures are further proposed to assess robustness to violations of identifying assumptions. Our framework connects fairness with measurement error models, aligning predictive accuracy with fairness relative to an appropriate target and providing a structural approach to modelling the unfairness mechanism.
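The sketch below is a minimal toy illustration of this idea, not the paper's method: it replaces the nonparametric sieve MLE with a simple logistic stand-in, and the variable names, bias mechanism, and parametric forms are all hypothetical simplifications chosen for transparency. A latent desert decision depends only on a legitimate feature, the observed decision is a biased measurement of it, and both the desert rule and the degree of unfairness $P(Y \neq D^*)$ are recovered by maximising the marginal likelihood of the observed decisions.

```python
# Toy illustration of the measurement-error view of unfair decisions.
# All names and parametric forms are hypothetical simplifications; the
# paper's nonparametric sieve MLE is replaced here by a logistic model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

rng = np.random.default_rng(0)
n = 5000

# X: legitimate feature (effort/ability); A: sensitive attribute.
X = rng.normal(size=n)
A = rng.integers(0, 2, size=n)

# Latent desert decision D*: depends only on X (fair by construction).
true_beta = 1.5
D_star = rng.binomial(1, expit(true_beta * X))

# Observed decision Y: a biased measurement of D*. Individuals with A=1
# who deserve a positive decision are wrongly denied with prob. 0.25.
deny_bias = 0.25
p_y = np.where((D_star == 1) & (A == 1), 1 - deny_bias, D_star)
Y = rng.binomial(1, p_y.astype(float))

def neg_loglik(theta):
    """Marginal likelihood of Y, integrating out the latent D*.
    theta = (desert-rule coefficient, logit of the biased-denial rate)."""
    beta, eta = theta
    p_dstar = expit(beta * X)              # P(D* = 1 | X)
    miss = expit(eta)                      # biased-denial rate for A=1
    # Assumed mechanism: P(Y=1 | D*=1, A) is 1 for A=0, 1-miss for A=1,
    # and P(Y=1 | D*=0) = 0 (no favourable errors in this toy model).
    p_y1 = p_dstar * np.where(A == 1, 1 - miss, 1.0)
    p_y1 = np.clip(p_y1, 1e-10, 1 - 1e-10)
    return -np.sum(Y * np.log(p_y1) + (1 - Y) * np.log(1 - p_y1))

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0]), method="BFGS")
beta_hat, eta_hat = fit.x
print(f"beta_hat = {beta_hat:.2f} (true {true_beta})")
print(f"estimated biased-denial rate = {expit(eta_hat):.2f} (true {deny_bias})")

# Degree of unfairness: estimated P(Y != D*), the desert-observed
# discrepancy. True value is 0.25 * P(D*=1, A=1) = 0.0625 here.
p_dstar_hat = expit(beta_hat * X)
unfairness = np.mean(p_dstar_hat * (A == 1)) * expit(eta_hat)
print(f"estimated P(Y != D*) = {unfairness:.3f}")
```

Under these toy assumptions the bias rate is identified because the A = 0 group pins down the desert rule, and the A = 1 group then reveals how often deserved approvals are overturned; the paper's nonparametric identification conditions play the analogous role without the parametric scaffolding.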