🤖 AI Summary
This paper addresses discrimination in individualized treatment rules (ITRs) arising from sensitive attributes—such as age, sex, and race—by formalizing demographic parity as a fairness criterion within the ITR learning framework for the first time.
Method: We propose a convex surrogate objective that reformulates non-convex fairness constraints as tractable convex quadratic programs, enabling efficient optimization. Our approach integrates fair machine learning, constrained optimization, and individualized causal inference.
Contribution/Results: We establish theoretical guarantees on estimation consistency and risk upper bounds for the learned fair ITRs. Extensive experiments on multiple synthetic and real-world healthcare datasets demonstrate that our method significantly improves cross-group fairness—measured by demographic parity—while preserving treatment efficacy. Moreover, the proposed estimator exhibits statistical consistency and robust generalization across heterogeneous subpopulations.
📝 Abstract
There has been growing interest in developing optimal individualized treatment rules (ITRs) in various fields, such as precision medicine, business decision-making, and social welfare distribution. The application of ITRs within a societal context raises substantial concerns regarding potential discrimination over sensitive attributes such as age, gender, or race. To address this concern directly, we introduce the concept of demographic parity in ITRs. However, estimating an optimal ITR that satisfies the demographic parity requires solving a non-convex constrained optimization problem. To overcome these computational challenges, we employ tailored fairness proxies inspired by demographic parity and transform it into a convex quadratic programming problem. Additionally, we establish the consistency of the proposed estimator and the risk bound. The performance of the proposed method is demonstrated through extensive simulation studies and real data analysis.