🤖 AI Summary
This work addresses the susceptibility of individualized decision rules to biases in training data, which can lead to unfair outcomes for groups defined by sensitive attributes such as gender or race. To mitigate this, the authors propose embedding demographic parity (DP) and conditional demographic parity (CDP) constraints directly into an optimal decision-learning framework. They develop an efficient perturbation-based approach that modifies the unconstrained optimal solution to yield a policy balancing utility and fairness, and they establish its statistical convergence rate. Theoretical analysis, supported by simulations and empirical evaluation using the Oregon Health Insurance Experiment, demonstrates that the proposed method strictly satisfies both DP and CDP fairness criteria while maintaining high decision value, offering a practical and scalable solution for fair policy learning.
📝 Abstract
Individualized decision rules (IDRs) have become increasingly prevalent in societal applications such as personalized marketing, healthcare, and public policy design. However, a critical ethical concern arises from the potential discriminatory effects of IDRs trained on biased data. These algorithms may disproportionately harm individuals from minority subgroups defined by sensitive attributes like gender, race, or language. To address this issue, we propose a novel framework that incorporates demographic parity (DP) and conditional demographic parity (CDP) constraints into the estimation of optimal IDRs. We show that the theoretically optimal IDRs under DP and CDP constraints can be obtained by applying perturbations to the unconstrained optimal IDRs, enabling a computationally efficient solution. Theoretically, we derive convergence rates for both policy value and the fairness constraint term. The effectiveness of our methods is illustrated through comprehensive simulation studies and an empirical application to the Oregon Health Insurance Experiment.
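To make the demographic parity (DP) notion concrete, the toy sketch below shows how a group-wise perturbation of an unconstrained decision threshold can equalize treatment rates across a binary sensitive attribute. This is an illustrative analogue, not the paper's estimator: the data, the score `s(X)`, and the quantile-shift perturbation are all assumptions introduced here for exposition.

```python
# Toy illustration (NOT the paper's method): demographic parity (DP) asks
# that the treatment rate P(d(X) = 1 | A = a) be equal across groups a.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: `score` proxies the estimated benefit of treatment;
# `a` is a binary sensitive attribute with group-dependent score shifts.
n = 10_000
a = rng.integers(0, 2, size=n)
score = rng.normal(loc=0.3 * a, scale=1.0)

def treatment_rates(decision, a):
    """Per-group treatment rates P(d(X) = 1 | A = a)."""
    return {g: decision[a == g].mean() for g in (0, 1)}

# Unconstrained rule: treat whenever the estimated benefit is positive.
# Groups end up treated at different rates, violating DP.
d_unconstrained = score > 0.0
rates_u = treatment_rates(d_unconstrained, a)

# Perturbed rule: shift each group's threshold so that both groups are
# treated at the same overall rate -- a crude analogue of perturbing the
# unconstrained optimum to satisfy the DP constraint.
target = d_unconstrained.mean()
d_fair = np.zeros(n, dtype=bool)
for g in (0, 1):
    thr = np.quantile(score[a == g], 1.0 - target)
    d_fair[a == g] = score[a == g] > thr
rates_f = treatment_rates(d_fair, a)

print("unconstrained:", rates_u)
print("DP-perturbed: ", rates_f)
```

Note the trade-off the abstract describes: the perturbed rule sacrifices some utility (a few individuals near the original threshold are reclassified) in exchange for a near-zero DP gap.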